OpenAI Announces Latest Updates

OpenAI

ChatGPT users now have more control over their data with the introduction of new features that let them manage their chat history and how their data is used.

Turn Off Chat History

We are excited to announce that users can now turn off chat history in ChatGPT. This feature allows you to choose which conversations can be used to train our models, providing an added layer of control over your data.

How it Works

Conversations started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar. These controls can be found in ChatGPT’s settings and can be changed at any time. This means that you have more flexibility to manage your data and choose which conversations are used to train our models.

Data Retention and Security

When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting them. This ensures that your data remains secure and is not used without your consent.
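As a rough illustration of the 30-day window described above, the sketch below computes when a history-disabled conversation would become eligible for permanent deletion. The function name and logic are assumptions for illustration; only the 30-day retention period comes from the text.

```python
from datetime import datetime, timedelta, timezone

# Retention window stated in the announcement.
RETENTION_DAYS = 30

def deletion_date(conversation_started: datetime) -> datetime:
    """Hypothetical helper: when a history-disabled conversation
    would be eligible for permanent deletion."""
    return conversation_started + timedelta(days=RETENTION_DAYS)

started = datetime(2023, 4, 25, tzinfo=timezone.utc)
print(deletion_date(started).date())  # 2023-05-25
```

The actual review-and-deletion pipeline is internal to OpenAI; this only illustrates the stated retention period.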

Existing Opt-Out Process

We understand that some users may prefer an opt-out process for their data. Our existing opt-out process remains available, but we hope this new feature provides a more user-friendly way to manage your data.

Introducing the Bug Bounty Program

As we continue to develop safe and advanced AI technology, we recognize the importance of security in our systems. To help ensure the safety and reliability of our technology, we are introducing the OpenAI Bug Bounty Program.

What is the Bug Bounty Program?

The Bug Bounty Program is a way for us to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure. We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems.

How Does it Work?

By sharing your findings, you will play a crucial role in making our technology safer for everyone. The Bug Bounty Program page provides more information on how to participate and what rewards are available.

Our Approach to AI Safety

At OpenAI, we take the safety of our AI technology seriously. Before releasing any new system, we conduct rigorous testing, engage external experts for feedback, work to improve the model’s behavior with techniques like reinforcement learning with human feedback, and build broad safety and monitoring systems.

Example: GPT-4

After our latest model, GPT-4, finished training, we spent more than six months working across the organization to make it safer and more aligned prior to releasing it publicly. This demonstrates our commitment to ensuring that our AI technology is safe and reliable.

Regulation

We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.

Conclusion

Our approach to AI safety is centered around creating technology and services that are secure, reliable, and trustworthy. We recognize the importance of security in our systems and invite you to join us in making our technology safer for everyone through the Bug Bounty Program.

ChatGPT Plugins

We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.

What Are Plugins?

Plugins are an essential part of our iterative deployment philosophy. We are gradually rolling out plugins in ChatGPT so we can study their real-world use, impact, and safety and alignment challenges—all of which we’ll have to get right in order to achieve our mission.
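To make the idea concrete: a plugin is typically described to the model by a manifest that pairs a machine-readable API spec with natural-language descriptions the model can reason about. The sketch below, written as a Python dict, is modeled on the publicly documented `ai-plugin.json` layout; all values are placeholders, not a real plugin.

```python
# Hypothetical plugin manifest, modeled on the documented
# ai-plugin.json format. Every value here is a placeholder.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Plugin",
    "name_for_model": "example_plugin",
    "description_for_human": "Looks up example data.",
    "description_for_model": "Use this tool to fetch example data for the user.",
    "auth": {"type": "none"},
    # The model learns the plugin's operations from an OpenAPI spec.
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

# Note the dual descriptions: one for people browsing the plugin store,
# one the model reads when deciding whether and how to call the plugin.
print(manifest["name_for_model"])
```

The key design point is that the `description_for_model` field, not code, is what steers the model's use of the tool, which is one reason plugin behavior is studied through gradual rollout.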

Conclusion

We hope that these new features provide you with more control over your data and a safer experience in ChatGPT. Our commitment to AI safety is unwavering, and we are excited to continue developing safe and advanced AI technology with the help of security researchers through the Bug Bounty Program.