Google Cautioning Employees on AI Chatbot Usage as Bard's Global Expansion Continues
Updated: Jan 5
Alphabet Inc advises employees to avoid sharing confidential information with chatbots like Bard and ChatGPT, highlighting potential data leak risks.
Alphabet Inc, the parent company of Google, is cautioning its employees about their use of AI chatbots, including its own Bard, even as it promotes the program worldwide, according to sources familiar with the matter. Reuters reports that the company has reiterated its longstanding policy of safeguarding confidential information, advising employees not to enter sensitive material into chatbot systems. Bard and ChatGPT, which use generative artificial intelligence to hold human-like conversations, can reproduce the data they absorb, making leaks possible.
Alphabet has also told its engineers to avoid direct use of computer code generated by chatbots. While the company acknowledged that Bard can make undesired code suggestions, it said the program still helps programmers, and Google emphasized its commitment to being transparent about the limitations of its technology. These precautions illustrate Google's desire to avoid business harm from software it has launched in competition with ChatGPT, which is backed by OpenAI and Microsoft Corp. At stake in that rivalry are billions of dollars of investment and still-untapped advertising and cloud revenue from new AI programs.
Google's cautious approach aligns with a growing corporate security standard: warning employees about the risks of using publicly available chat programs. Numerous businesses worldwide, including Samsung, Amazon.com, and Deutsche Bank, have set guidelines for AI chatbot usage. Apple, which did not respond to requests for comment, is also reported to have adopted similar measures.
According to a survey by Fishbowl, approximately 43% of professionals were using ChatGPT and other AI tools as of January, often without telling their superiors. Insider reported that in February, Google instructed staff testing Bard not to share internal information with the chatbot. Bard is now being rolled out in more than 180 countries and 40 languages, billed as a catalyst for creativity, and Google's warnings about code suggestions extend to this expansion.
Google has confirmed that it has engaged in detailed discussions with Ireland's Data Protection Commission and is addressing regulators' inquiries. This follows a Politico report stating that the company had postponed Bard's launch in the European Union pending further information regarding the chatbot's impact on privacy.
Concerns regarding sensitive information surround this technology's ability to draft emails, documents, and even software, promising significant task acceleration. However, these outputs may contain misinformation, sensitive data, or even copyrighted content. Google's privacy notice, updated on June 1, explicitly advises users not to include confidential or sensitive information in their conversations with Bard.
Some companies, such as Cloudflare, have developed software solutions to address these concerns. Cloudflare offers businesses the capability to tag and restrict data from being transmitted externally, safeguarding against potential data leaks. Both Google and Microsoft are offering conversational tools to enterprise customers, which come with higher price tags but ensure that data is not absorbed into public AI models. By default, Bard and ChatGPT save users' conversation history, although users can choose to delete it.
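Tools of this kind generally work by scanning outbound text for patterns that mark it as sensitive, and blocking the transmission when a rule matches. The sketch below is purely illustrative of that idea, assuming a small hand-written rule set; the pattern names and functions are hypothetical, not Cloudflare's actual product.

```python
import re

# Illustrative patterns for data that should not reach a public chatbot.
# Real data-loss-prevention products use far richer rule sets and custom tags.
CONFIDENTIAL_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}


def flag_confidential(text: str) -> list[str]:
    """Return the names of every confidential pattern found in the text."""
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(text)]


def allow_transmission(text: str) -> bool:
    """Permit the outbound message only if no confidential pattern matches."""
    return not flag_confidential(text)
```

A real deployment would sit at the network edge, classify traffic with much richer rules and document tags, and log or quarantine flagged messages rather than simply refusing them.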
Microsoft's consumer chief marketing officer, Yusuf Mehdi, said it is logical for companies to discourage the use of public chatbots for work. He explained that Microsoft's enterprise software carries much stricter policies than its free Bing chatbot, reflecting the conservative standpoint companies are taking. Microsoft declined to comment on whether it has a blanket ban on entering confidential information into public AI programs, though another executive described personally restricting such usage.
Cloudflare CEO Matthew Prince compared typing confidential matters into chatbots to "turning a bunch of PhD students loose in all of your private records," emphasizing the risks associated with sharing sensitive information.
Alphabet Inc advises employees against entering confidential information into AI chatbots like Bard and ChatGPT due to potential data leak risks.
Google cautions engineers to avoid direct usage of computer code generated by chatbots.
Google emphasizes transparency and aims to prevent business harm from competing with ChatGPT.
Corporations worldwide, including Samsung, Amazon.com, and Deutsche Bank, implement guidelines on AI chatbot usage.
Survey shows 43% of professionals use AI tools like ChatGPT without informing superiors.
Google discusses privacy concerns with regulators amid Bard's global expansion.
Cloudflare offers solutions to tag and restrict data flow in response to security concerns.
Google and Microsoft provide conversational tools to enterprise customers, ensuring data protection.
Microsoft takes a conservative standpoint with stricter policies for enterprise software.
Cloudflare CEO warns against sharing confidential information with chatbots.