ChatGPT may well be the most famous, and potentially most valuable, algorithm of the moment, but the artificial intelligence techniques OpenAI uses to power it are neither unique nor secret. Competing projects and open-source clones may soon make ChatGPT-style bots available for anyone to copy and reuse.

Stability AI, a startup that has already developed advanced open-source image-generation technology, is working on an open competitor to ChatGPT. “We are a few months from release,” says Emad Mostaque, Stability's CEO. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI's.

The coming influx of sophisticated chatbots will make the technology more widespread and visible to consumers, as well as more accessible to AI businesses, developers, and researchers. That could accelerate the rush to make money with AI tools that generate images, code, and text.

Well-known companies like Microsoft and Slack are incorporating ChatGPT into their products, and many startups are scrambling to build on the new ChatGPT API for developers. But the wider availability of the technology may also complicate efforts to predict and mitigate the risks that come with it.

ChatGPT’s remarkable ability to provide convincing answers to a wide range of queries also leads it to sometimes fabricate facts or adopt problematic personas. And it can assist with malicious tasks such as generating malware code or fueling spam and disinformation campaigns.

As a result, some researchers have called for a slowdown in the deployment of ChatGPT-like systems while the risks are assessed. “There’s no need to stop research, but we could certainly regulate widespread deployment,” says Gary Marcus, an artificial intelligence expert who has sought to draw attention to risks such as misinformation created by artificial intelligence. “We can, for example, request studies on 100,000 people before we release these technologies to 100 million people.”

The widespread availability of ChatGPT-style systems and the release of open-source versions would make it difficult to restrict research or wider deployment. And the race among companies large and small to adopt or match ChatGPT suggests little appetite for slowing down; instead, it appears to be accelerating the technology’s spread.

Last week, LLaMA, an artificial intelligence model developed by Meta and similar to the one behind ChatGPT, was leaked online after being shared with some academic researchers. The system could be used as a building block for creating a chatbot, and its release caused concern among those who fear that such AI systems, known as large language models, and the chatbots built on them, like ChatGPT, will be used to generate disinformation or automate cybersecurity breaches. Some experts argue that such risks may be exaggerated, while others believe that making the technology more transparent will actually help others guard against misuse.

Meta declined to answer questions about the leak, but company spokesperson Ashley Gabriel issued a statement saying, “While the model is not available to everyone, and some have tried to circumvent the approval process, we believe our current release strategy allows us to balance responsibility and openness.”
