(Update: Part I is here; Part II is here)
If you found out a friend was refusing to vaccinate her children, would you connect her with a group convinced vaccines cause autism? If a young Shiite in a Beirut slum told you about the evils of Sunnis, would you recommend he watch Hezbollah’s news channel? If your uncle made a racist remark, would you send him a video arguing that diversity is white genocide? Thousands of times each day, Big Tech firms fail these simple moral tests, and the impact on society has been dire: more polarization, more disinformation, more hatred, more violence, and more deaths.
As alums of these companies, we at Omelas have seen firsthand the disastrous results of blind faith in tech, of assuming all users will be well intentioned, and of feel-good cultural mantras that work in privileged, utopian bubbles but not in the real world. In a four-part series on how ethics can be integrated seamlessly into both the tech stack and the business decision-making process, we’ll discuss the lessons we’ve taken from the broader technology industry to make sure we build our tech ethically and realize the social, not just financial, outcomes we started Omelas to achieve. We strongly believe in the value of open debate, and we encourage you to send any feedback or ethical questions for us to discuss to pr@omelas.co.