Agencies and brands, driven by strategic business decisions to adopt generative artificial intelligence, are increasingly using small language models for more task-driven solutions.
“As we work with clients, we plan to use [SLMs] because the data set [to train] is smaller, and its tasks are defined to a particular brand’s needs,” said Michael Olaye, senior vice president and managing director of strategy and innovation at R/GA, which began testing SLMs in early January.
Interest in SLMs bubbled up last November when Microsoft announced the launch of its own SLM, Phi-2. In its latest earnings call, Microsoft revealed that customers including Anker, Ashley, AT&T, EY and Thomson Reuters are exploring Phi for their AI applications.
The rise of SLMs indicates a shift from costly and resource-intensive large language models toward more efficient and adaptable alternatives, making it easier for agencies and brands to accomplish task-driven initiatives.
“The key takeaway for advertisers in 2024 is to be aware of [SLMs] as a developing gen AI area,” said Cristina Lawrence, executive vp of consumer and content experience at Razorfish. “If discoveries are made that reveal valuable use cases, they could enhance efficiency and reduce cost.”
Here’s what you need to know about SLMs.
What are SLMs?
SLMs are slimmed-down versions of LLMs that are easier to train on narrower data sets, reduce inappropriate responses and deliver more relevant outputs, all at lower cost.
“An LLM is trained on an expansive, broad set of publicly available data covering massive amounts of information,” said Lawrence. “But specializing an AI model in brand knowledge, or instructional data sets, can make the models more focused and deliver a more targeted user experience. It can also be costly to train an LLM with the processing power required, but when you tighten the scope of data, it becomes more accessible for companies to experiment with.”
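For readers curious about how lightweight these models are in practice, here is a minimal sketch (not from the article) of loading Microsoft's publicly available Phi-2 checkpoint with the open-source Hugging Face Transformers library and running a narrow, brand-style prompt. It assumes the `transformers`, `torch` and `accelerate` packages are installed; the prompt text is a hypothetical example of the kind of task-driven query described above.

```python
# Minimal sketch: running a small language model (Phi-2, ~2.7B parameters)
# locally with Hugging Face Transformers. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # publicly available SLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps memory use modest
    device_map="auto",          # small enough to fit on a single GPU
)

# Hypothetical brand-specific, task-driven prompt.
prompt = "Summarize our brand's return policy for a customer email:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is a fraction of the size of a frontier LLM, this kind of experiment can run on a single workstation GPU, which is the cost and accessibility advantage Lawrence describes.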