This technologically enhanced feedback loop aims to make each partner more aware of the other’s emotions and to corroborate them with context data, in a way that could gradually train their “empathy muscle.” Results from our experimental studies show that users experience an increased level of attention, as well as awareness of self and others.
We are exploring applications of Project Us as part of Diversity and Inclusion initiatives in the workplace (e.g., training of inclusive behaviors and inclusive leadership), but we are also interested in other domains, such as enhancing the effectiveness of telemedicine and mental health interventions, supporting personal relationships, and aiding conflict resolution.
If you are interested in collaborating, please reach out to contact@projectus.ai.
This is a prototype of a university-based research project. We are continuing to develop the system in terms of accuracy, security, end-to-end user experience, and applications. We would greatly appreciate your feedback.
Vision
Support better conversations and relationships through empathy building.
Inspiration
Empathy, our ability to feel someone's emotional state while retaining the knowledge of its personal origin, stands at the core of our existence as humans. It has contributed dramatically to our evolution as a species and remains a key driver of how we experience life. Being empathetic can make us more effective at work and less stressed; it can improve our relationship satisfaction and give us a deeper sense of connection and attachment. Still, we sometimes find it difficult to empathize with others, and for reasons that remain poorly understood, some people face more challenges than others. Technologically enabled solutions, ranging from virtual reality (VR) to tangible avatars, have shown promise in this direction. Yet existing techniques tend to be difficult and expensive to deliver (e.g., requiring VR headsets) and are often disconnected from daily life.
State of the Project
Us consists of two modules that can be used either separately or jointly. Our results indicate that users experience an increased level of attention and awareness of self and others with each module used separately.
1. Virtual interface (Us.virtual) – can run during any virtual interaction (e.g., Zoom), extract insights about emotions from the conversation (from signals such as speech, tone, and facial expressions), and discreetly feed them back through an on-screen display; a simplified sketch of such a feedback loop appears below. This tool has been tested in a user study with 60 participants (see the Publications section).
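To make the shape of such a pipeline concrete, here is a minimal Python sketch of a per-call feedback loop. It is purely illustrative and is not the Us.virtual implementation: the capture functions, classifiers, and overlay (capture_video_frame, classify_face, classify_speech, render_overlay) are hypothetical stubs standing in for real signal-capture and emotion-recognition components.

# Illustrative sketch only: every capture/classification function below
# is a hypothetical stub, not part of the actual Us.virtual system.

import time
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str         # e.g., "neutral", "engaged", "tense"
    confidence: float  # 0.0 to 1.0

def capture_video_frame():
    """Stub: a real system would grab a frame from the call's video feed."""
    return None

def capture_audio_window():
    """Stub: a real system would buffer a short window of call audio."""
    return None

def classify_face(frame) -> EmotionEstimate:
    """Stub standing in for a facial-expression model."""
    return EmotionEstimate("neutral", 0.5)

def classify_speech(audio) -> EmotionEstimate:
    """Stub standing in for a speech/tone model."""
    return EmotionEstimate("neutral", 0.5)

def fuse(estimates):
    """Combine per-signal estimates; here, simply keep the most confident one."""
    return max(estimates, key=lambda e: e.confidence)

def render_overlay(estimate: EmotionEstimate) -> None:
    """Stand-in for a discreet on-screen display."""
    print(f"[overlay] conversation tone: {estimate.label} "
          f"({estimate.confidence:.0%} confidence)")

def feedback_loop(cycles: int = 3, interval_s: float = 2.0) -> None:
    """Periodically sample the call, classify emotions, and feed them back."""
    for _ in range(cycles):
        face = classify_face(capture_video_frame())
        speech = classify_speech(capture_audio_window())
        render_overlay(fuse([face, speech]))
        time.sleep(interval_s)  # low-frequency updates keep the display unobtrusive

if __name__ == "__main__":
    feedback_loop()

One design point worth noting in this sketch: feedback is sampled at a deliberately low frequency, since an overlay that updates constantly would compete for attention rather than support it.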