Apple CEO Tim Cook speaks at the Apple Worldwide Developers Conference (WWDC) at the San Jose Convention Center in San Jose, California, Monday, June 4, 2018.

Josh Edelson | AFP | Getty Images

Large language models like ChatGPT can produce entire blocks of text that read as if they were written by a human. Companies are racing to integrate ChatGPT into their products, including Microsoft, Snap and Shopify. But that trend could stall if Apple decides to restrict access to ChatGPT-based apps in its App Store, which is the only way to install software on iPhones.

Blix, an email software maker that regularly clashes with Apple over App Store rules, says it ran into that hurdle this week.

Co-founder Ben Volach told the Wall Street Journal that Apple rejected an update to its BlueMail app because it integrated ChatGPT to help compose emails but didn’t include content filtering for the chatbot’s output. Volach also said on Twitter that Apple is “blocking” the AI update.

Apple said that without content filtering, the BlueMail chatbot could generate words that are not suitable for children, and according to the report, the mail app would have to raise its age rating to 17 and older.

Apple is investigating the rejection, and developers may appeal the decision, a spokesperson told CNBC.

Still, the BlueMail episode is not necessarily a sign of a coming Apple crackdown on AI software.

In fact, ChatGPT-based features are already available in Snapchat and Microsoft’s Bing app, both of which are distributed through the App Store. Other AI apps like Lensa have also spread and thrived there.

There is no official AI or chatbot policy in Apple’s App Store Guidelines, the document that governs which apps Apple approves for the App Store. Apple has staff in a department called App Review who download and briefly use every app and update before approving it.

Apple could add AI-specific rules in the future. For example, in a 2018 update, Apple introduced a section of the guidelines about cryptocurrency, which allows wallet apps but prohibits on-device mining. Last year, Apple introduced new rules for NFTs. The company frequently issues updates to its guidelines in June and October.

But the BlueMail episode shows that Apple’s App Store is strict about content generated at massive scale — whether by users (as in social media apps) or, more recently, by artificial intelligence.

If an app can display content that infringes intellectual property or messages related to cyberbullying, for example, then the app must have a way to filter that material and a way for users to report it, Apple says.

The content moderation rule was likely the crux of the fight with Elon Musk’s Twitter late last year and the reason Apple pulled Parler from the App Store in 2021. Apple allowed Parler to return to the App Store when it added content moderation.

Before it was released on iPhone through the Bing app, Bing’s ChatGPT-based AI generated some unsettling conversations, including threats to its users and pleas for help.

But Bing has built-in content moderation and filtering tools. Microsoft’s AI lets users flag harmful responses and includes a safety system with content filtering and abuse detection. Microsoft has also updated its Bing chatbot in recent weeks to rein in those unsettling conversations, and the chatbot now often refuses to engage with topics that could cause it to go off the rails.

