Biggest risks of gen AI in your private life: ChatGPT, Gemini, Copilot


Many consumers are enamored with generative AI, using new tools to handle a variety of personal or business matters.

But many people overlook the potential privacy implications, which can be significant.

From OpenAI's ChatGPT to Google's Gemini to Microsoft's Copilot and the new Apple Intelligence, consumer-ready artificial intelligence tools have proliferated. However, these tools have different privacy policies regarding the use and retention of user data. In many cases, consumers are unaware of how their data is or can be used.

This is where being an informed consumer becomes extremely important. Depending on the tool, there's different granularity in what you can control, said Jodi Daniels, CEO and privacy consultant at Red Clover Advisors, which advises companies on privacy issues. “Not all tools have universal opt-out capabilities,” Daniels said.

The proliferation of artificial intelligence tools, and their integration into so much of what consumers do on PCs and smartphones, makes these questions even more relevant. A few months ago, for example, Microsoft released its first Surface PCs with a dedicated Copilot button on the keyboard for quick access to the chatbot, fulfilling an earlier promise. Apple, for its part, last month outlined its artificial intelligence vision, which centers on several smaller models that run on Apple's devices and chips. Company executives have spoken publicly about the emphasis the company places on privacy, which can be a challenge with AI models.

In the new era of artificial intelligence, consumers can protect their privacy in the following ways.

Ask AI the privacy questions it must be able to answer

Before choosing a tool, consumers should carefully read the relevant privacy policy. How is your information used and how might it be used? Is there an option to turn off data sharing? Is there a way to limit which data is used and how long it is retained? Can data be deleted? Do users have to go to great lengths to find the opt-out setting?

Privacy professionals say if you can't easily answer these questions or can't find the answers in the provider's privacy policy, that should raise a red flag.

“A tool that cares about privacy will tell you,” Daniels said.

If not, “you have to take ownership of it,” Daniels added. “You can't just assume that a company will do the right thing. Every company has different values and every company makes money differently.”

She gave the example of Grammarly, an editing tool used by many consumers and businesses, as a company that clearly explains in several places on its website how data is used.

Keep sensitive data away from large language models

Some people are trusting when it comes to plugging sensitive data into generative AI models, but Andrew Frost Moroz, founder of the privacy-focused Aloha Browser, advises people not to enter any kind of sensitive data, since they don't really know how it could be used or possibly misused.

This is true for all types of information people might enter, whether it's personal or work-related. Many companies have serious concerns about employees using AI models to help with their work, because employees may not consider how the models use that information for training. If you enter a confidential document, the AI model now has access to it, which could raise all kinds of concerns. Many companies will only approve customized versions of gen AI tools that keep a firewall between proprietary information and large language models.

Frost Moroz said individuals should also exercise caution and not use AI models for anything that is not public or that they would not want shared with others in any capacity. It's important to be aware of how you're using AI. If you're using it to summarize a Wikipedia article, that might not be a problem. But if you're using it to summarize a personal legal document, for example, that's not advisable. Or say you have an image of a document and want to copy a particular paragraph. You can ask the AI to read the text so you can copy it, but the AI model will then know the content of the document, and consumers need to keep that in mind, he said.

Use opt-outs provided by OpenAI, Google

Each gen AI tool has its own privacy policy and may offer opt-out options. Gemini, for example, allows users to set a retention period and delete certain data, among other activity controls.

Users can opt out of having ChatGPT use their data for model training. To do this, they need to click the profile icon in the lower-left corner of the page, select “Data Controls” under the “Settings” heading, and then disable the “Improve the model for everyone” feature. According to a FAQ on OpenAI's website, while this feature is disabled, new conversations will not be used to train ChatGPT's models.

Jacob Hoffman-Andrews, a senior staff technologist at the Electronic Frontier Foundation, an international nonprofit digital rights organization, said there is no real benefit to consumers in allowing gen AI to train on their data, and the risks are still being studied.

If personal information is improperly posted online, consumers may be able to have it removed, and it will then disappear from search engines. But untraining an AI model is an entirely different matter, he said. There may be ways to mitigate the use of certain information once it is in an AI model, but it's not foolproof, and how to do this effectively is an area of active research.

Only opt in for a good reason, such as with Microsoft Copilot

Companies are integrating artificial intelligence into everyday tools that people use in their personal and professional lives. For example, Copilot for Microsoft 365 can operate in Word, Excel, and PowerPoint to help users complete tasks such as analysis, idea generation, and organization.

For these tools, Microsoft says it does not share consumer data with third parties without permission and does not use customer data to train Copilot or its artificial intelligence features without consent.

However, users can opt in if they wish by signing in to the Power Platform admin center, selecting Settings, then Tenant settings, turning on data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features, and saving the change.

Advantages of opting in include the ability to make existing features more effective. The downside, however, is that consumers don't have control over how their data is used, which is an important consideration, privacy professionals say.

The good news is that consumers who have opted in with Microsoft can withdraw their consent at any time. Users can do this by going to the Tenant settings page under Settings in the Power Platform admin center and turning off the data sharing toggle for Dynamics 365 Copilot and Power Platform Copilot AI features.

Set a shorter retention period for generative AI used for search

Consumers may not think twice before using AI to find information, much as they would use a search engine to generate information and ideas. However, even searching for certain types of information with gen AI can intrude on personal privacy, so there are best practices when using the tools for that purpose.

If possible, set a shorter retention period for the AI tool, Hoffman-Andrews said. And, if possible, delete chats once you have the information you need. The companies still have server logs, but doing so can help reduce the risk of a third party gaining access to your account, he said. It also reduces the risk of sensitive information becoming part of model training. “It really depends on the privacy settings of the particular site.”
