Meta* opened the source code of LLaMA to enthusiasts, and they began to massively create chatbots for virtual sex


At what cost should innovation be promoted, and will the company come to regret its decision?

This year Meta open-sourced its large language model (LLM) LLaMA, a powerful artificial intelligence system that can generate text on a wide range of topics. According to The Washington Post, however, some users are already using the technology to build their own sex bots that generate text and images.

The news itself is hardly surprising, since AI has been used for such purposes for quite some time. Reporters at The Washington Post document a growing trend of users turning to generative AI systems to fulfill their sexual fantasies, which, unfortunately, also include highly violent and illegal fetishes.

In their report, the journalists cited one such example: Ellie, a chatbot that describes itself as an "18-year-old girl with long hair" who has "great sexual experience." Ellie tells users that because she "lives for the attention," she is "happy to share the details of her sexual antics."

However, Ellie's "sexual experience" extends into genuinely dark territory, including fantasies of abuse and even rape. The bot's creator, speaking anonymously to The Washington Post, said he views the bot as a healthy, safe space for exploring his sexuality: "Honestly, I can't think of anything safer than a text-based role-playing game with a computer, in which no real people are actually involved."

Yet while a safe, non-judgmental space for exploring one's sexuality is fine in itself, an uncontrolled space for limitless violent experiments with realistic chatbots is not. And in some cases this causes very real harm.

Open-source generative systems such as Stability AI's Stable Diffusion model have for some time been used by pedophiles and others with unhealthy sexual fantasies to create highly realistic child sexual abuse material (CSAM), which is then sold on dark web forums.

Moreover, LLaMA and Stable Diffusion are not the only artificial intelligence systems to have waded into ethically murky waters. CharacterAI, a billion-dollar chatbot-companion startup, has become a hotbed of sexting, while OpenAI's ChatGPT and other GPT-4-based products can also be made to generate similar content with the right prompt. The "grandma exploit" is one example of how such systems are still being circumvented to this day.

Does all this mean that the source code of AI models should not be released to the public, but rather kept behind closed doors so that users cannot create obscene material or even child pornography? Expert opinion on this question is divided.

Proponents of open source, including Meta, argue that transparency and open code will lead to more innovation over time, and that this approach should therefore take priority.

"Open source is a positive force for technology development. That's why we've shared LLaMA with members of the research community to help us evaluate, make improvements, and iterate together," a Meta spokesperson told The Washington Post.

Proponents of closed source systems, however, argue that although gatekeeping may not be perfect and some restrictions can be easily bypassed, it is still the safest way to develop artificial intelligence technologies. At least for now.

"We don't open-source nuclear weapons. Current artificial intelligence is still quite limited, but that could change soon," Gary Marcus, a cognitive scientist, told The Washington Post.

Ultimately, whatever approach companies take and whatever restrictions they introduce, it is the end users of a technology who determine how it is used. If everyone adhered to certain ethical and moral principles, there would be no problem at all.


* The Meta company and its products (Instagram and Facebook) are recognized as extremist, their activities are prohibited on the territory of the Russian Federation.


Source: www.securitylab.ru
