Erfan Dastournejad

Commentary by Heidi Biggs

For his thesis, Erfan examined the ambiguous, awkward, and emerging arena of socializing in avatar-oriented virtual reality experiences like Meta’s (Facebook’s) Metaverse and Microsoft’s AltspaceVR. As part of his initial research, Erfan performed walkthroughs of these platforms and their social spaces. He found that the social and contextual cues we would normally use to enter a conversation or gain a sense of belonging simply don’t exist in the same way in the metaverse. For example, a VIP rooftop bar in Tokyo (this was Erfan’s example) is a distinctive context: one can picture the kind of people who might be there and the type of conversations that might transpire. In other settings, one might know when to join a conversation based on body language and eye contact. Social environments in VR as it stands, however, give a user few clues about who they are talking to or what the conversation is about, and avatars don’t generate lifelike body language or social signaling. Erfan therefore decided to design ways to build context and anticipation so that conversations are easier to join in open, social VR spaces.

Other VR platforms organize context spatially; there might, for example, be a virtual place one goes to talk about politics. Erfan imagined users could build context apart from spatial relations, through topic-signaling. He designed and developed a VR prototype as a proof of concept for topic-based conversational clusters. In his design, users meet to talk and create what I think of as ‘topic islands’: when avatars start talking together, a little bubble forms on the ground around them, signaling that they are having a conversation. As they talk, speech bubbles appear over their heads containing keywords of the topics they are raising, and supporting media accumulates around the conversation, which someone could peruse as they virtually stroll by. Topic markers and media generated by the conversants build context so that someone outside the conversation can understand what is being discussed and join in. Erfan also designed ways to feel the anticipation of joining a conversation. Anticipation is generated through an adorable interaction: as one gets close to a conversation, the circle surrounding the conversants deforms and reaches out to the person considering joining, while a small circle under their ‘feet’ simultaneously reaches out toward the conversation circle. If the person joins, their circle is absorbed and the conversation circle enlarges.
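To make the anticipation mechanic concrete, here is a minimal sketch of how one might model it. Everything here is my own illustrative assumption, not Erfan's implementation: the radii, the linear ramp, and the area-preserving growth rule are all placeholders for whatever the prototype actually does.

```python
import math

# Illustrative thresholds (assumptions, not values from the prototype).
APPROACH_RADIUS = 5.0  # distance at which the circle starts reaching out
JOIN_RADIUS = 1.0      # distance at which the avatar is absorbed

def reach_amount(distance):
    """How strongly (0..1) the conversation circle deforms toward an
    approaching avatar: 0 when far away, 1 at the moment of joining."""
    if distance >= APPROACH_RADIUS:
        return 0.0
    if distance <= JOIN_RADIUS:
        return 1.0
    # Linear ramp between the two radii; a real prototype might ease this.
    return (APPROACH_RADIUS - distance) / (APPROACH_RADIUS - JOIN_RADIUS)

def try_join(circle_radius, member_count, distance):
    """If the avatar is close enough, absorb it: the conversation circle
    grows so the area per member stays roughly constant."""
    if distance > JOIN_RADIUS:
        return circle_radius, member_count  # still approaching
    new_count = member_count + 1
    new_radius = circle_radius * math.sqrt(new_count / member_count)
    return new_radius, new_count

print(reach_amount(6.0))        # far away: no deformation
print(reach_amount(3.0))        # mid-approach: partial reach
print(try_join(2.0, 2, 0.5))    # close enough: circle grows, count increments
```

The appeal of framing it this way is that both parties signal: the conversation reaches out as `reach_amount` rises, and the newcomer's own foot-circle could use the same value to stretch back, so anticipation is mutual and continuous rather than a binary join event.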

Space, place, and embodied cues are intimately linked to human understanding, and Erfan is trying to push the expectations of how humans might navigate VR context otherwise. One thing Erfan is adamant about is that VR is not the same as ‘real life’ and should be explored on its own terms. He uncovers a fascinating tension in his exploration by orienting to ideas rather than larger-scale spatial arrangements. I can’t help but think of the design of a city or a building, how integral spatial mappings are to wayfinding and understanding, and how, when they are gone, things can feel chaotic, trippy, or dreamy (Alice in Wonderland?). It would be super interesting to bring this ‘topic island’ model to scale and explore emergent or creative ways to navigate multiple context-driven conversations in virtual space.