
Research Areas

HCI, future of work, digital labor, social computing, sustainability

ℹ️ I am on the job market! Seeking tenure-track and postdoctoral roles in CS, HCI, and Information Science departments.

  • LinkedIn
  • Twitter

Hello!

I am a doctoral candidate in the Information Science department at Cornell University. My research lies at the intersection of Human-Computer Interaction (HCI), Computer Science (CS), and Development Sociology. I study and design technologies that minimize inequities in digital labor, especially for underserved communities. I take a mixed-methods approach with the aim of translating my research insights into impactful technological solutions. You can read my work in leading conferences, including CHI, CSCW, ICTD, and COMPASS. My research has been generously funded by Engaged, Einaudi, and Mozilla grants. I also actively mentor under-resourced students and provide research assistance to leading non-profits. Please feel free to send me an email at rv288 at cornell [dot] edu.

UPDATES

Jan'23:

New publication @CHI'23 on Responsible AI challenges encountered by practitioners in big tech.

Oct'22:

Received the prestigious DLI fellowship.

Apr'22:

Will be working as a student research intern on Google's Responsible AI team starting this summer.

Mar'22:

Received an honorable mention for our CHI'22 paper on women crowd workers.

Feb'22:

Another CHI'22 paper, on the motivations and challenges of first-time women crowd workers, has been accepted.

Dec'21:

CHI'22 paper exploring misinformation engagement practices in rural communities is out.

Nov'21:

Quoted in a new article about the skyrocketing market for block programming in India.

LATEST WORK

CHI'23

“It is currently hodgepodge”: Examining AI/ML Practitioners’ Challenges during Co-production of Responsible AI Values

Recently, the AI/ML research community has indicated an urgent need to establish Responsible AI (RAI) values and practices as part of the AI/ML lifecycle. Several organizations and communities are responding to this call by sharing RAI guidelines. However, there are gaps in awareness, deliberation, and execution of such practices among multi-disciplinary ML practitioners. This work contributes to the discussion by unpacking the challenges practitioners face as they align their RAI values. We interviewed 23 individuals across 10 organizations, each tasked with shipping AI/ML-based products while upholding RAI norms, and found that both top-down and bottom-up institutional structures burden selective roles with upholding RAI values, a challenge that is further exacerbated when executing conflicting values. We share multiple value levers practitioners used as strategies to resolve their challenges. We end our paper with recommendations for inclusive and equitable RAI value-practices, supportive organizational structures, and opportunities to further aid practitioners.

Pre-print (Soon!)