As the generative AI race heats up, Google is urging caution with the release of its ChatGPT rival, warning that “things will go wrong”.
Image by Blue Planet Studio via Adobe Stock
Google has recently released its ChatGPT rival, Bard, but it has yet to integrate the chatbot into its search engine, unlike Microsoft, which launched a new ChatGPT-fueled Bing and 365 apps. Google’s cautiousness may be warranted. According to The Wall Street Journal, media publishers are gearing up for a showdown with Microsoft, Google, and OpenAI over their bots.
It would seem Facebook is no longer the media’s biggest threat; now the biggest threat is AI bots. These generative AIs are trained on massive amounts of text data, which includes copyrighted articles from the web.
Recently, OpenAI announced a new feature of ChatGPT: it can now browse the web for current information, whereas previously its knowledge was largely limited to data collected before 2021. This could prove to be a problem for publishers of online content.
Some publishing executives are worried about their content being used to train bots like ChatGPT and are considering legal action. The News Media Alliance is leading the push to get publishers paid for the content that AI systems like ChatGPT use. There have also been cases where ChatGPT was found to reproduce human-written text with only slight modifications.
Under the fair-use doctrine, portions of copyrighted material can be used without permission in certain cases. However, news executives, including News Corp. CEO Robert Thomson, believe that if their proprietary content is being used, they should be paid for it.
AI is also disrupting higher education, with educators and schools already scrambling to detect cheating. Beyond plagiarism and theft of intellectual property, there is also the issue of privacy. According to Jeanine Kwong Bickford, writing for BCG, “more executives are becoming aware of the risks within their organization as employees experiment with generative AI.”
Moving fast can backfire. Tech titans have taken an “ask for forgiveness, not permission” approach, and industries haven’t had time to assess the legal and ethical implications that could arise, such as bias, copyright infringement, and privacy concerns. When these issues catch up to the pace of innovation, they could arrive as a backlog of problems all at once.
How to Put Generative AI to Work Responsibly
Generative AI tools like ChatGPT carry real risks. Still, AI has the potential to dramatically accelerate innovation and change our lives by eliminating much of the grunt work. With that in mind, here are four ways we at Nicole Williams Collective are putting generative AI to work responsibly.
1. Use it as a starting point. Use generative AI to help brainstorm ideas and find inspiration, but rely on your human skills to bring everything together into a cohesive whole.
2. Use it to automate repetitive tasks. Simplify and streamline processes and tasks that you would otherwise have to perform manually. For instance, it can help you sort your emails, prioritize your to-do list, and send reminders for upcoming deadlines.
3. Use it to conduct research. Thanks to AI’s language processing capabilities, it can quickly understand your queries and find the information you need.
4. Use it to augment your ability to spot trends. AI can analyze data and surface patterns, trends, and correlations, helping you make more informed decisions.
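To make item 2 a little more concrete: here is a minimal plain-Python sketch of the kind of deadline-prioritization step an AI assistant might automate for you. This is purely illustrative, not an actual workflow from the article; the task format (a dict with a name and an optional due date) is a hypothetical assumption.

```python
from datetime import date

def prioritize(tasks):
    """Return tasks sorted by due date, soonest first; undated tasks go last.

    Each task is assumed (hypothetically) to be a dict with a "name" key
    and an optional "due" date.
    """
    # Sort key: (has no due date, due date) — dated tasks come first,
    # ordered by deadline; undated tasks sink to the end.
    return sorted(tasks, key=lambda t: (t.get("due") is None, t.get("due") or date.min))

todo = [
    {"name": "file expenses", "due": None},
    {"name": "draft post", "due": date(2023, 4, 10)},
    {"name": "send invoice", "due": date(2023, 4, 3)},
]
```

Calling `prioritize(todo)` would put “send invoice” first and push the undated “file expenses” to the end — the same triage an assistant performs when it reorders your to-do list and flags upcoming deadlines.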