Insights from global conversations


We know that in order to fulfill our mission to build AI systems that benefit everyone, we need to spend significant time directly engaging people who are interacting with and affected by our technology. This is why, for four weeks in May and June, an OpenAI team led by our CEO Sam Altman traveled to 25 cities across 6 continents to speak with users, developers, policymakers, and the public, and hear about each community’s priorities for AI development and deployment. We are deeply grateful to the hosts and partners who helped make our trip a success.

The trip has helped us better understand the perspectives of users, developers, and government leaders around the world. With their feedback in mind, we are placing additional focus on these areas:

Making our products more useful, impactful, and accessible. The trip clarified our sense of what it takes for our products to be accessible and useful for users and developers around the world. We are working on changes to make it easier for people to guide our models toward responses that reflect a wider variety of individual needs and local cultures and contexts. We are also working toward better performance for languages other than English, considering not only lab benchmarks, but also how accurately and efficiently our models perform in the real-world deployment scenarios that matter most to our developers. And we are committed to continuing to make our pricing structure accessible for developers around the world.

Further developing best practices for governing highly capable foundation models. As the public debate over new AI laws and regulations continues, we will redouble our efforts to pilot and refine concrete governance practices specifically tailored to highly capable foundation models like the ones we produce. This includes critical safety workstreams such as pre-deployment safety evaluation and adversarial testing, and new efforts to empower people to track the provenance of AI-generated content. We believe such measures will be important components of a governance ecosystem for AI, alongside long-established laws and sector-specific policies for some important applications. We will also continue to invest in piloting approaches for incorporating broad-based public input into our deployment decisions, adding localization features to our systems, and fostering an international research community to expand and strengthen the evaluation of model capabilities and risks, including via external research on our AI systems and our cybersecurity grants program.

Working to unlock AI’s benefits. We will be expanding our efforts to support broad AI literacy, which we heard is a priority for many communities, as well as investing in ways for creators, publishers, and content producers to benefit from these new technologies so we can continue to have a healthy digital ecosystem. Additionally, we are building teams that can provide more support to organizations exploring how to use our tools for broadly beneficial applications, and we are conducting research into, and developing policy recommendations for, the social and economic implications of the systems we build.

We will have more to say in the weeks and months ahead in each of these areas. A warm thank you to everyone around the world who shared their perspectives and experiences with us.