
Five Back-to-the-Future Predictions for AI in 2025

First published on The Fast Mode.

In the 1980s sci-fi comedy hit “Back to the Future,” a teenager accidentally travels back in time and prevents his parents from falling in love, dooming his own existence unless he can set things right and get back to the present. In the world of AI, many are concerned that while our existence may not be threatened, we are charging into the future at a speed that outpaces our ability to properly address fundamental AI issues like security, governance, resource consumption and social impacts. Fortunately, some emerging trends tap into past wisdom and best practices in ways that can help us better navigate the current challenges and future opportunities of AI in enterprise and telecommunications connectivity and security.

These five trends are:

  1. Increased adoption of small language models,
  2. Pivot from model thinking to systems thinking,
  3. Sharper focus on AI agent security,
  4. Greater recognition of humans as ‘differentiators,’ and
  5. Renewed emphasis on data quality.


Trend 1

Large-Scale Adoption of Small-Scale Models

Large language models (LLMs) like GPT-3/-4, Gemini, Llama, Claude, Mixtral, etc. have enabled astonishing breakthroughs in generative AI. While each has unique strengths with certain types of output (e.g., text, images, audio, video, or code), they share a common weakness: a ravenous appetite for compute power, memory, training data, and energy.

Another challenge LLMs share is their generalized nature. While this is a key LLM strength, it also makes adapting them to specific needs challenging. Developers often make extensive manual adjustments to prompts to adapt an LLM to their requirements, which can lead to deep, even total, model dependence for AI applications. Other methods include fine-tuning the model using human feedback loops, and improving relevance and accuracy through run-time queries to authoritative external datasets (i.e., Retrieval-Augmented Generation, or RAG).
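To make the RAG pattern concrete, here is a minimal sketch in Python. It is illustrative only: the in-memory document list and keyword-overlap retriever stand in for a real embedding model and vector store, and call_model() is a hypothetical placeholder for whatever LLM or SLM client an application actually uses.

```python
# Minimal sketch of the RAG pattern: retrieve authoritative context at
# run time, then fold it into the prompt. Illustrative only -- a real
# system would use an embedding model and vector store, and call_model()
# is a hypothetical stand-in for an actual LLM/SLM client.

KNOWLEDGE_BASE = [
    "2FA messages must be delivered within seconds to be useful.",
    "Promotional A2P messages are typically rate-limited by operators.",
    "Malicious links in messages can be blocked in real time.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM or SLM API call."""
    return f"[model answer based on a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    """Augment the user query with retrieved context, then call the model."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_model(prompt)

print(answer("How quickly must 2FA messages be delivered?"))
```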

Another strategy altogether is to start with a small language model (SLM) instead of an LLM, even if one still applies fine-tuning and RAG. The SLMs currently being released by both start-ups and the LLM titans produce results comparable to those of LLMs while having far fewer parameters and being trained on much smaller but high-quality datasets. [1]

This makes it easier to adapt SLMs to specific objectives and constraints, and it reduces (although does not eliminate) risks like model dependence and hallucinations. Importantly, it also means they have much more modest resource needs: they can function on standard CPUs, have much lower memory requirements, and consume far less energy.

Accordingly, we are likely to see much wider adoption of SLMs in 2025. And, in some of the methods used to improve results with both SLMs and LLMs – namely fine-tuning and RAG – we can see the contours of another 2025 trend: a shift from a model-centric approach to a more systems-centered approach in the development of language model-based AI applications.


Trend 2

A Shift from Model Thinking to Systems Thinking

In early 2024, some, like the Berkeley Artificial Intelligence Research (BAIR) group, were already noting this shift. They observed that while it was understandable that the enthusiasm unleashed by GPT-3 and ChatGPT inspired developers to place LLMs at the center of AI application development, the focus had (rightly) begun to shift to state-of-the-art results “increasingly obtained by compound systems with multi-components, not just monolithic models,” [2] which one BAIR member later described as an approach “that can produce better results in just about every way they can be measured.” [3]

Others seemed to remain more firmly ensconced in a model-centered mindset in 2024, but sensed change was coming. For instance, at OpenAI DevDay in November 2024, when OpenAI CEO Sam Altman was asked about considerations for choosing one LLM over another for an AI application, he responded “Developers use multiple models most of the time and I’m not sure how that’s all going to evolve as we head towards this more agentified world, but… something about the way that we currently talk about it or think about [AI] feels wrong. Maybe, maybe if I had to describe it, we will shift from talking about models to talking about systems.” [4]

A systems approach to LLM-based application development employs the same basic elements systems development has always used, like modular design, generic architectures, and data-driven optimization. And it provides a valuable layer of abstraction by wrapping language model programming frameworks around prompt engineering. [5]
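As a rough illustration of what that abstraction layer can look like, the sketch below composes two small modules into a pipeline, keeping prompt construction inside each module rather than scattered through application code. The module interface and the run_model() helper are invented for the example and do not represent any particular framework’s API.

```python
# Illustrative sketch of a compound system: small modules with a uniform
# interface, composed into a pipeline. Prompt engineering lives inside
# each module, behind the abstraction. run_model() is a hypothetical
# placeholder for a real LLM or SLM call.

from dataclasses import dataclass
from typing import Callable

def run_model(prompt: str) -> str:
    """Hypothetical model call; swap in any LLM or SLM client."""
    return f"[response to: {prompt[:48]}]"

@dataclass
class Classify:
    labels: tuple[str, ...]
    def __call__(self, state: dict) -> dict:
        state["label"] = run_model(
            f"Classify as one of {self.labels}: {state['input']}"
        )
        return state

@dataclass
class Respond:
    def __call__(self, state: dict) -> dict:
        state["output"] = run_model(
            f"Given category {state['label']}, respond to: {state['input']}"
        )
        return state

def pipeline(modules: list[Callable[[dict], dict]], state: dict) -> dict:
    """Run each module in turn; each reads and extends the shared state."""
    for module in modules:
        state = module(state)
    return state

result = pipeline(
    [Classify(("2FA", "promotional")), Respond()],
    {"input": "Your verification code is 123456"},
)
print(result["output"])
```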

With the use of AI agents expected to surge in 2025 [6], this more formal systems approach offers a timely advantage in terms of security and governance as well, and it has a strong historical foundation for developers to draw on.


Trend 3

A Heightened Focus on AI Agent Security

Broadly speaking, AI agents are “artificial entities that sense their environment, make decisions, and take actions.” [7] When people today talk about AI agents, they are usually talking about LLM-based AI agents, but general AI agent design and development stretches back to the 1950s in the AI subfield of ‘planning AI’ (or ‘AI planning’), and the 1980s in the subfield of Distributed Artificial Intelligence (DAI) and related subdiscipline of Multiagent Systems (all generally intended to support industrial automation). [8]
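That sense-decide-act definition reads almost directly as a control loop. The sketch below is a generic, deliberately non-LLM illustration of it; the sensor, policy, and actuator are hypothetical placeholders, and in an LLM-based agent the decide() step would be a model call over the observation plus the agent’s instructions and tools.

```python
# Generic sense-decide-act agent loop, matching the definition above.
# The sensor, policy, and actuator are hypothetical placeholders.

import random

def sense() -> float:
    """Hypothetical sensor: observe some value from the environment."""
    return random.uniform(0.0, 1.0)

def decide(observation: float) -> str:
    """Hypothetical policy: map an observation to an action."""
    return "alert" if observation > 0.8 else "wait"

def act(action: str) -> None:
    """Hypothetical actuator: carry the action out."""
    print(f"action taken: {action}")

for _ in range(3):  # a real agent would normally loop indefinitely
    act(decide(sense()))
```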

As noted in the research paper “Security of AI Agents,” securing LLM agents is more complex than securing non-LLM agents. Non-LLM agents operate on pre-defined rules or reinforcement learning models, which makes them harder to generalize to new tasks and different tools, but easier to control and secure. [9] Unfortunately, the researchers found that the potential side effects and dangers of the LLM agents studied were addressed neither by the frameworks used to build them nor by research aimed at improving them.

This finding is echoed in “AI Agents Under Threat,” which provides a comprehensive view of LLM agent threats across 100+ research papers. [10] The study concludes the security challenges associated with AI agents remain largely under-explored and unresolved.

It is therefore logical to expect that as agent use proliferates, so will agent-related security incidents, and activity in AI agent security will intensify. Indeed, a debate is already taking shape as to whether existing cybersecurity tools (e.g., identity and access management (IAM), zero trust, anomaly detection, observability, etc.) can be adapted to cover agent security, or whether new, specialized tools and techniques will be required. [11]
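As one concrete, and deliberately simplified, example of the ‘adapt existing tools’ position, the sketch below applies an IAM-style, deny-by-default allow-list to agent tool calls. The agent names, tools, and dispatch step are all invented for illustration.

```python
# Deny-by-default authorization for agent tool calls, borrowing the
# least-privilege idea from IAM and zero trust. All names are invented
# for illustration; a real system would also authenticate the agent
# and log every call for audit.

ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "draft_reply"},
    "billing-agent": {"lookup_invoice"},
}

def authorize(agent_id: str, tool: str) -> bool:
    """Permit only tools explicitly granted to this agent."""
    return tool in ALLOWED_TOOLS.get(agent_id, set())

def invoke_tool(agent_id: str, tool: str, args: dict) -> str:
    if not authorize(agent_id, tool):
        raise PermissionError(f"{agent_id} is not allowed to call {tool}")
    # ... dispatch to the real tool implementation here ...
    return f"{tool} executed with {args}"

print(invoke_tool("support-agent", "search_kb", {"query": "refund policy"}))
```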

Beyond security, another source of serious concern is whether AI agents will augment or replace workers, and we can expect this issue to receive more attention too in 2025.


Trend 4

Humans as ‘Differentiators’ in an Agentic World

In 2020, the World Economic Forum (WEF) forecast workforce automation would displace 85 million jobs by 2025, counterbalanced by a ‘robot revolution’ which would create 97 million new jobs. [12] More recent studies are less clear in forecasting a net job loss/gain ratio for AI, though they are fairly prescriptive about which roles will be most affected. [13]

Whatever the balance will ultimately be, we can expect to hear more in 2025 about initiatives to aid workers in adapting to the agentic AI era. And, if current trends we are seeing at Enea are an indication, we may even see human workers increasingly positioned as a key business differentiator.

For instance, customers place a high value on Enea’s use of AI to address challenges like optimizing message delivery costs through categorization (e.g., 2FA versus promotional messages), blocking malicious links embedded in messages in real time, and enabling regulatory compliance by performing AI-based restricted-image detection in A2P messages. At the same time, there has been an uptick in the number of customers who tell us that a key factor in choosing Enea is the global team of deeply experienced human experts in our signaling and messaging security threat intelligence units (TIUs).

Similarly, in our embedded security division, we work with the most innovative enterprise cybersecurity vendors in the business, and to a one, they are achieving superb results by pairing their multi-AI agent systems with external, human-curated threat intelligence services and expert in-house security analysts.

In any case, we humans have an advantage vis-à-vis our AI counterparts (at least for now) in being able to generalize far beyond our direct experiences, and benefiting richly from physical and emotional inputs in learning and reasoning. We can also profit handsomely from serendipity, and even humor, in approaching certain analytical and creative tasks, and can forge the kind of human-to-human bonds of trust, loyalty and goodwill upon which business relationships are built.

So in 2025, don’t be surprised if some current conversations about how AI can ‘augment’ humans are flipped to how humans can improve and extend AI to create a true competitive advantage.


Trend 5

The (Re)Coronation of Data Quality as King

Another factor likely to differentiate products and services in 2025 is data quality. While the adage “garbage in, garbage out” has served as a golden rule in computer science since the 1950s, it is more relevant than ever in our contemporary AI environment.

The GIGO rule posits that the quality of a program’s results is inextricably dependent on the quality of the data used to build and operate it. In pre-LLM-era machine learning, it was well understood that data quality had such an important effect on the quality of ML results that enhancing input data quality alone could dramatically improve a model’s output, without any algorithmic changes. [14]
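As a toy illustration of that point, the sketch below applies simple validity checks to training records before any model is fit; the fields and thresholds are invented for the example.

```python
# Toy data-quality gate: filter training records with simple validity
# checks before training anything. Fields and thresholds are invented
# for the example.

def is_high_quality(record: dict) -> bool:
    text = record.get("text", "")
    return (
        len(text.split()) >= 5                  # drop fragments
        and not record.get("duplicate", False)  # drop known duplicates
        and record.get("label") in {"benign", "malicious"}  # valid labels
    )

raw_records = [
    {"text": "dns tunneling suspected on port 53 flow", "label": "malicious"},
    {"text": "ok", "label": "benign"},  # too short: dropped
    {"text": "routine https session to a known cdn", "label": "benign"},
]
clean = [r for r in raw_records if is_high_quality(r)]
print(f"kept {len(clean)} of {len(raw_records)} records")
```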

LLMs, however, loosened the GIGO rule slightly. Early LLMs were able to achieve leaps in qualitative performance largely due to brute computing power and staggering volumes of training data – namely the entire corpus of the Internet. Some worried progress would taper off as the available stock of untapped high-quality data (i.e., human-generated content) on the Internet leveled off.

However, returning to Trend 1, the training of smaller models with (relatively) small but high-quality training data is yielding comparable, and sometimes better, results than LLM-based applications. And so it seems Amir Khosrowshahi (then Intel AI CTO) is getting an answer to the question he posed in 2018: “Do you really need to see 100 million cats to learn how to identify cats?” [15]

This general quality-over-quantity direction also explains the increase in demand we are seeing for our deep packet inspection software library. While the types and number of sources producing network-traffic-related data have exploded, much of this data is of medium to low quality and vulnerable to tampering. [16] Hence deep packet inspection, the gold standard for network traffic data, is proving to be a pivotal tool for developing successful AI applications, and we expect this year’s double-digit growth to continue in 2025.

And thus, tapping into back-to-the-future traditions like using high-quality data, following best practices in systems design and information security, and nurturing your team’s human advantage will all have a hand in shaping what is sure to be a successful year ahead for AI.

Endnotes

1. See, for instance, the data points at https://tomtunguz.com/small-but-mighty-ai/ (noting the 30x cost differential between the largest and smallest models). For data on a representative SLM from a large vendor (Microsoft’s Phi-4), see: https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090

2. Berkeley Artificial Intelligence Research, “The Shift from Models to Compound AI Systems,” https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/

3. Christopher Potts, Stanford webinar, “Large Language Models Get the Hype, but Compound Systems Are the Future of AI,” https://www.youtube.com/watch?v=vRTcE19M-KE

4. Sam Altman OpenAI DevDay fireside chat: https://www.youtube.com/watch?v=-cq3O4t0qQc

5. Short survey of the LLM application framework landscape: https://www.infoworld.com/article/3617664/surveying-the-llm-application-framework-landscape.html

6. For example, Gartner has named Agentic AI as its Top Strategic Technology Trend for 2025 (https://www.gartner.com/en/articles/top-technology-trends-2025), which is an expectation shared by tech titans like Google, Amazon, Meta, OpenAI, Anthropic, Salesforce, Microsoft and Nvidia, who closed out 2024 with a flurry of major Agentic AI announcements.

7. “The Rise and Potential of Large Language Model Based Agents: A Survey,” https://arxiv.org/pdf/2309.07864

8. “Multiagent Systems: A Survey from a Machine Learning Perspective,” https://www.cs.utexas.edu/~pstone/Papers/bib2html-links/MASsurvey.pdf

9. “Security of AI Agents,” https://arxiv.org/pdf/2406.08689

10. “AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways,” https://arxiv.org/pdf/2406.02630

11. For an example of adapting an existing approach (here, API security) to AI agent system security, see https://www.databricks.com/blog/ai-gateways-secure-ai-agent-systems. It is also interesting to note that the often-cited first elaboration of the “zero-trust” concept was a dissertation on AI planning agent collaboration: Stephen Paul Marsh, “Formalising Trust as a Computational Concept,” https://dspace.stir.ac.uk/bitstream/1893/2010/1/Formalising%20trust%20as%20a%20computational%20concept.pdf

12. https://www.weforum.org/press/2020/10/recession-and-automation-changes-our-future-of-work-but-there-are-jobs-coming-report-says-52c5162fce/

13. See for example, https://www.mckinsey.com/mgi/our-research/a-new-future-of-work-the-race-to-deploy-ai-and-raise-skills-in-europe-and-beyond

14. https://www.enea.com/insights/rich-network-traffic-data-for-machine-learning-ml-artificial-intelligence-ai/

15. https://venturebeat.com/ai/intel-ai-boss-its-time-to-move-from-brute-force-to-more-efficient-computing/

16. See the “2024 State of Network Threat Detection” survey report at https://www.enea.com/insights/state-of-network-threat-detection/ and the on-demand webinar at https://www.youtube.com/watch?v=Y-Ds54lihYM
