Use open source for safer generative AI experiments


Bibliographic Details
Other Authors: Culotta, Aron (author); Mattei, Nicholas (author)
Format: eBook
Language: English
Published: [Cambridge, Massachusetts] : MIT Sloan Management Review, 2023.
Edition: [First edition]
Subjects:
See on Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009825873306719
Description
Summary: The public availability of generative AI models, particularly large language models (LLMs), has led many employees to experiment with new use cases, but it has also put some organizational data at risk in the process. The authors explain how the burgeoning open-source AI movement is providing alternatives for companies that want to pursue applications of LLMs while maintaining control of their data assets. They also suggest resources for managers developing guardrails for safe and responsible AI development.
Item Description: Reprint #65221.
Physical Description: 1 online resource (5 pages)