Andover Intel https://andoverintel.com All the facts, Always True Thu, 26 Mar 2026 11:29:40 +0000 en-US hourly 1 244390735 Inference: Can AI Achieve it and Scale it Down? https://andoverintel.com/2026/03/26/inference-can-ai-achieve-it-and-scale-it-down/ Thu, 26 Mar 2026 11:29:40 +0000 https://andoverintel.com/?p=6344 Nvidia’s CEO is talking about “inference” in AI, about making AI really able to think in a way at least similar to the way people do. Is this just another attempt to sustain the hype wave, or is it a realistic and important shift in the way that the AI giants and pundits are considering how artificial general intelligence (AGI) differs from the real thing? Nvidia CEO Jensen Huang says AGI has already been achieved, but whether that’s true or not is hard to assess, and it’s even harder to say if it really matters.

Enterprises have told me many times that they’re not looking for AI to become human in order to make a business case. AGI, to them, means that AI can reason things out rather than be told everything. It’s the difference between, for example, writing code to fulfill a business need and simply translating a specific approach into programmatic steps. The core of this, many say, is the notion of “inference”.

Inference is the application of prior knowledge and experience to forecast the way something works or could be made to work. In AI, the notion is that you’d have a foundation model that has been trained on something general, you’d give it a specific situation to analyze (likely by having it “read” a digital twin or analyze sensors or video), and then ask it to pitch in and do the right thing to answer a question or solve a problem. Enterprises think that’s critical in building AI agents, and it’s also what enterprises mean when they think of “autonomous AI”. They aren’t seeing AI systems running around doing stuff without human supervision, but rather doing a contained set of things within boundaries set by the application, and by the workers who built it.

Current AI is all about training, which means that what it can do is limited to what has been at least discussed, if not done, already. We do have some applications, in the image analysis space in the health-care vertical in particular, where some would argue that we’re already applying inference, but physicians tell me that these are still about training in the traditional sense. They say that reading a radiographic image is really just pattern-matching. One pointed out that the transitive property in math (things equal to the same thing are equal to each other) could be considered a basic form of inference, but for most it’s too basic to count as a true deduction, a true inference.
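To make the physician’s distinction concrete, here is a minimal, hypothetical sketch (the function and names are mine, not from any deployed system) of transitive inference of the kind described: deriving a fact that was never explicitly stated, as opposed to matching against trained examples.

```python
# Minimal sketch of transitive inference: derive equalities that were
# never stated directly, rather than matching stored patterns.
def infer_equalities(known_pairs):
    """Compute the closure of an 'equals' relation over known pairs."""
    groups = []  # each group is a set of items known to be equal
    for a, b in known_pairs:
        merged = {a, b}
        rest = []
        for g in groups:
            if g & merged:
                merged |= g  # transitivity: merge overlapping groups
            else:
                rest.append(g)
        groups = rest + [merged]
    return groups

# a == c and b == c are stated; a == b is inferred, never stated.
groups = infer_equalities([("a", "c"), ("b", "c")])
print(any({"a", "b"} <= g for g in groups))
```

Even this trivial rule goes beyond lookup: the conclusion isn’t in the input data, which is the physicians’ point about why image reading, however impressive, is still pattern-matching.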

If we could create an AI system that was capable of true inference, it would be able to serve as an expert within its range of knowledge, just like a human could. This says nothing about whether it was conscious and self-aware, nothing about whether the same system could be an expert in other areas, or how that might come about. None of that is critical to enterprises at this point. What is critical is a way of approaching their notion of AI agent value, and they think that’s a form of inference.

Many tech types will cite examples of agent value from the IT and network operations space they’re most familiar with. How could an AI agent manage a network or data center? They see it as a process of observation and inference, a combination perhaps of machine learning and AI. They need to be able to translate this vision into trials, and to do that they need confidence at both the practitioners’ level and the approvals level.

One of the challenges in building all this confidence is the fact that the AI stories almost exclusively favor the cloud-chatbot model of AI that anyone can use, and to a degree use for free. Enterprises have consistently told me that “acclamation” is important in justification; if you can cite a bunch of stories on something, it’s easier to get buy-in for it. Of course, citing real successes from other familiar enterprises would be better, but lacking that, good ink will serve well. There aren’t many such stories out there. Even when somebody like Nvidia, at one of their events, cites things that relate to self-hosted AI, it generates only a little buzz, and often only in association with things like robots.

There are two reasons for this. One is that those “practitioners” and “approvers” make up a very small audience, and publications’ revenue depends on clicks. There could be millions of ardent technophiles out there ready to read a story about AI running a factory full of robots, but (as one CIO tells me repeatedly) there are only 500 CIOs in Fortune-500 enterprises. The other reason is that it’s a lot harder to write a story about useful, real applications of AI to things like operations than to spin a robot yarn, where you’ve got a couple of generations who’ve read Isaac Asimov to populate your prospective audience.

All of this is hiding potential answers to the real question about inference, which is the resources needed to support it. Even if we can make a giant, gigawatt-eating AI data center inference-capable, there aren’t many agent applications we could address with it. Small language models are limited versions of LLMs; would a small inference model even be possible, and how small might it be? It would sure be nice to know that.

It would also be nice to know whether limited inference could be trained into a foundation model, one that could then be used to act on smaller batches of local data. The point is that in real-world missions, the places where inference is likely the most valuable are local to the processes, which limits their physical size and power requirements, and even raises the possibility that they’d have to be portable/mobile.

The AI giants would propose an alternative, which is that inference running in their giant data centers and connected via new high-speed, low-latency links would serve well, without as many of those annoying small-inference-model questions. Yes, if we could deploy the links and if no local solution were possible. It is likely that early inference missions would be served that way, but just as we’ve seen with things like chatbots and LLMs, improvements in technology will gradually let us shrink the inference engines into something more broadly useful. If, of course, we can get someone to work on something as boring as that, given the hype-ridden state of AI overall. Whether we’ve achieved AGI is far less important than whether we can scale inference down enough to leverage it in real-time missions.

Has Appmod Come to OSS/BSS via AI? https://andoverintel.com/2026/03/25/has-appmod-come-to-oss-bss-via-ai/ Wed, 25 Mar 2026 11:42:25 +0000 https://andoverintel.com/?p=6342 If you were a passenger on the Titanic, you might have had (for a time) a great interest in stories about lifeboats and rescues at sea. If you’re a telco, you might have had similar interest in stories about “opportunities”. Stories wouldn’t have helped those passengers so long ago, and they won’t likely help telcos either, but we still have to examine them for the latter group, at least. So, we come to a Fierce Network piece that quotes the head of Amazon’s telco unit, Ian Hofmeyr. “Telcos are newly excited about AI, especially for speeding up modernization,” it says. If they are, should they be?

The main focus of the piece is “modernization” of current software, something that’s talked about by every single vertical and every single enterprise I’ve chatted with over decades. With telcos, it focuses on OSS/BSS systems, which is their core business application set. Telcos are in fact talking about OSS/BSS modernization, but they’ve been doing that for decades, too. I was in a meeting with the key tech planners of a big US Tier One about fifteen years ago, and the guy on one side of me at the table was in favor of modernizing OSS/BSS, while the one on the other side favored totally scrapping them. Neither got done, so why now? That’s the question we need to be looking at here.

The biggest driver of application modernization (“appmod”) is cloud computing. While very few enterprises have moved everything, or even most things, to the cloud, nearly all have adopted the cloud as a way of applying elastic resources to an essentially elastic problem, which is how to support customers, prospects, and partners with access to some core business data when this usage is highly variable. This front-end stuff is what’s changed in the OSS/BSS game, versus the past modernization goals.

Obviously, an AWS telecom guy would be focused on AWS, meaning on cloud hosting of something. What I think has developed recently in the telco world is a mirror of what happened in the enterprise IT space over the last couple decades. The relationship between core applications and users changed because of the Internet. If you’re going to reach out to customers directly from your software rather than through an employee agent, you need to create a customer-friendly portal, one that reflects the geographic and technology-experience breadth of that space. The variability in demand makes self-hosting this portal inefficient, so cloud services make sense. Telcos are finally getting that, so they’re reflecting cloud-front-end thinking in their OSS/BSS planning.

I have heard about just under two-dozen telcos who are somewhere on the journey this new recognition implies. None of them see this as a journey to AI, and only about a third see AI as more than a potential tool in migration. The reason is complicated, but interesting.

Most enterprises have built core applications themselves over the years, and so they have to consider the question of how they adopt front-end portal technology in the cloud when their core applications are old and monolithic. Some enterprises have indeed looked at broadly modernizing the stuff—the whole “appmod” trend of the 2010-19 decade is a good example. Most have elected to simply create an interface into/with core applications rather than try to do any sort of massive transformation.

Telcos are both different, and the same. Most telco OSS/BSS systems are third-party software with some layers and shims to customize their behavior to each telco’s operating requirements and regulatory frameworks. They are, to a degree, dependent on the progress of their OSS/BSS vendor in prepping for cloud portals. In addition, telcos are service companies, and a service company is different because what their core software is doing is managing an ongoing relationship set, not a series of independent sales. It’s more challenging to transform service management than sales.

In any event, modernization of any sort runs into a problem I’ve seen for decades, best explained by a comment I had from a CIO. “Conversion projects are the worst projects you can propose. They’re all cost and no benefit. The best you can hope for is that nobody knows you ever did anything.” The fact is that the business case for OSS/BSS transformations has always been hard to make, as THIS piece shows well.

The relationship between AT&T and AWS, cited in the first of my references, shows that a hybrid-cloud model, with premises-hosted but cloud-compatible elements, is a good way to create a front-end portal. AWS Transform, a generalized tool to manage appmod projects, is a help in such tasks, according to both enterprises and telcos, but neither group has told me they expect it to help them make a business case.

Another point is that, so far, no telco has indicated that OSS/BSS modernization has a significant impact on the bottom line. That doesn’t mean that there’s no value there, just that it’s a bit of an ordeal to get much backing for the project. The piece from Passionate About OSS/BSS asks “Why are transformation approvals (eg business case approvals, vendor selections, project transformation decisions) forced to look perfect when delivery is anything but?” Sadly, business cases always have to look as good as possible or they won’t get approved, and most of them suffer from execution deterioration. But the same piece notes that OSS/BSS transformation projects often fail “because stakeholders expect certainty and perfection, so transformation leaders feel forced to model uncertainty as certainty.” That goes back to my old CIO quote.

Which, to me, means that Hofmeyr saying “I’m starting to see leadership coming into telco that have absolutely zero tolerance for that and are driving the right outcomes…I really feel there’s a shift,” may reflect more wishful thinking than realism. I don’t think that telcos really see AI as an opportunity, not to use it or to sell it, if one defines “opportunity” as meaning “something that can make a business case.” I do think that most are hopeful on both counts, for the same reason that those Titanic passengers were hopeful; the alternative to hope was despair. Something needs to be done in the world of telcos, on the cost side and on the revenue side. Could AI play a role? I think so, but then telcos have refused lifelines thrown to them in the past. Will this one be handled differently? I wonder.

How Cisco Sees the State of Industrial AI https://andoverintel.com/2026/03/24/how-cisco-sees-the-state-of-industrial-ai/ Tue, 24 Mar 2026 11:36:03 +0000 https://andoverintel.com/?p=6339 There’s no shortage of AI survey reports these days, yet they keep coming. You decide whether that’s just eagerness to promote the current hype wave or actual importance. Beyond that, we all need to decide whether the data being offered is actually valid. Cisco just released its “State of Industrial AI Report”, and we’ll get to that in a minute. Before we do, I want to point out some of the barriers we face in assessing any such document.

A survey of any kind can be valid if the right people are asked the right questions, they understand the questions, and the analysis of the results isn’t biased in any way. There’s no way for me to judge Cisco’s bias here, so we have to look at the other points. Who are “the right people” and what are “the right questions”, and can the former understand the latter?

Almost everyone has an opinion on AI. Many, including almost anyone in tech, have at least used AI. However, most of these are not the “right people” for any survey of AI because they are casual users of a free-service AI model. In Cisco’s case in particular, they have nothing whatsoever to do with “industrial AI”, and probably don’t know what it is. The first Cisco-specific question is whether the people who contributed are among the few qualified.

Cisco’s introduction says “We spoke to decision-makers at firms in 19 countries, operating in 21 industrial sectors including manufacturing, utilities and transportation.” How many? I don’t know. What kind of decisions are they making, meaning do they have any direct understanding of what Cisco is calling “industrial AI”, and is their understanding consistent across the base, and with Cisco’s use of the term in analyzing their responses? I don’t know that either.

Based on this so far, I’d be justified in saying that this report’s value cannot be assessed at all, and so my continuing with it is a waste of my time, and reading it a waste of yours. OK, feel free to act on that. However, I do have enterprise views on AI to draw on, and I can compare the report to what those views reveal, so let’s do that and simply point out the potential reasons for difference where there is one.

I’m going to draw from a group of 181 enterprises who offered me comments on real-time edge computing applications, because 1) “Industrial AI” would seem to necessarily focus on direct process control and 2) process control missions already involve premises-hosted edge computing. Within that group is a smaller group of 48 who said that they believed it likely that their real-time applications would involve edge hosting away from the process points, which means that something other than local-area telemetry would be involved. However, I have over 500 comments on AI that I can reference if we need to look at broader attitudes. Finally, the report is long, so to do my analysis in a blog of reasonable length I need to focus on the executive summary, with some comment on the broad insights of the rest. OK? Let’s go.

Cisco’s Executive Summary starts with “Industrial AI demands network modernization”, and says that 51% of their survey base expected that AI implementation would require “significant increases in connectivity and reliability requirements”. In my 181-real-time group, this view was held by only 27%, and in the larger 500-plus group by 39%. It also says that 96% of those responding think that wireless networks are vital, a view held by only around 5% of my 500 and 181 groups, and by a bit over 10% of my group of 48. However, the Cisco comment that 44% of their base said greater edge compute capacity was needed and 42% that greater bandwidth was needed seems consistent with what I hear from the 181 group, but the broader group would have little or no qualification to offer a view on this.

The comments on security are interesting; Cisco finds 40% saying cybersecurity was a top obstacle to AI adoption. In my 500 group, three-quarters say that, but in the 181 group only about five percent did. The reason is that to both of my groups, the presumption is that AI would have to process business-critical data, subject to governance. It’s not clear that simple process telemetry data would have that requirement. The last security point, which is that 85% expect AI to improve their security posture, doesn’t align at all with what I hear from any of my groups. It also makes me wonder whether Cisco is asking people knowledgeable about industrial AI missions, because nobody I chat with sees AI’s application to industrial automation playing a security role at all.

Next, we have to look at the views of enterprises on just how real-time industrial computing, AI or otherwise, would actually develop. What they tell me is that they’d evolve out of the expansion of current missions of “local edge” computing, shifting in some cases to self-hosting of expanding real-time applications where multiple sites in the same metro permitted, and finally to edge services. Recall that only about a quarter of the 181 group had experienced any of this, and they were confined to a few verticals. In addition, the applications were not said to have an AI component.

The final point in the executive summary is that IT/OT cooperation is critical to “AI at scale”. They say 43% operate today with limited or no such cooperation, and 90% claim wireless instability with siloed IT/OT teams but only 61% with collaboration. They also say that without IT/OT collaboration, only 72% are confident of scaling AI, while with it 83% are confident. Here, “OT” means “Operations Technology”, meaning the process control elements themselves (the report doesn’t seem to define this, and I wonder if the people surveyed interpreted it correctly).

There’s some interesting data in this report, in the details that follow the executive summary, but also some that are troubling given the industrial-AI focus. For example, on page 13 the report shows that 22% of those who responded said they were already seeing favorable outcomes for AI, which my contacts indicate cannot be the case given that industrial AI adoption is in single digits.

What I think is true here is that Cisco is looking at an at-some-future-point vision of real-time process control, one in which wireless has a much greater role than it does today (today, most process/OT elements are hard-wired, and there is no significant view that this would change except to accommodate control of moving devices). Further, they are getting comments that conflate the evolution of real-time process control and the evolution of AI as a part of that. There is a potential role for AI in the future of real-time process control, both as a means of building the digital twins or world models needed, and as an element of such models. However, I can’t find a significant number of enterprises who have confidently-held views on either.

I wish the data gathering here were better focused; I think there could be a lot of value. However, it seems clear to me, based on what I hear, that the survey wasn’t able to maintain the tight focus on qualified targets and on-topic responses that was needed. AI is a technology; industrial AI is a mission, and if you’re reporting on the latter you need to keep that in mind.

What Enterprises Say About Traffic in Distributed AI https://andoverintel.com/2026/03/19/what-enterprises-say-about-traffic-in-distributed-ai/ Thu, 19 Mar 2026 11:42:28 +0000 https://andoverintel.com/?p=6337 Enterprises have said all along that the AI network traffic that mattered would not come from carrying queries and replies, but from any information flow within the model, or data flow to the model from databases used in analysis or training. They’ve also said, more recently, that they expected any meaningful AI application to be self-hosted, and that some at least would involve distributed models linked in some way. We know that the former data flows would almost surely be within a data center, but what about the latter ones? We have limited enterprise comment on this (27 that were clearly based on first-hand knowledge and another 20 that seemed to be from qualified people), but it’s possible to see some interesting patterns from this.

The notion that AI might be distributed, with models/elements deployed in multiple points but engaged in a cooperative mission, arises from four factors. First, some missions for AI demand that some logic be local to a point of activity. You can’t have self-driving vehicles that depend on remote AI for things like collision avoidance. Second, some missions involve a larger data store for some functionality, and thus should logically be local to the data storage point. Third, simply looking at latency, it may be possible to group the functionality by latency requirements, then host each group at an optimum point for hosting economy of scale. Finally, issues of data governance or privacy may demand that some AI missions that could be partially cloud-hosted need to have governed components pulled out and placed under local control to meet compliance requirements.
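The third factor, grouping functionality by latency requirements, can be pictured with a short sketch. This is purely illustrative: the tier names and latency thresholds are my own assumptions, not figures from any enterprise or vendor.

```python
# Hypothetical sketch of latency-based placement: bucket AI functions by
# their latency budget, then assign each to the most centralized (and
# cheapest, via economy of scale) hosting tier that can still meet it.
TIERS = [  # (max tolerable round-trip in ms, hosting location) -- illustrative
    (10, "on-premises / process-local"),
    (40, "metro edge"),
    (200, "regional cloud"),
]

def place(function_name, latency_budget_ms):
    """Return the outermost tier whose latency ceiling covers the budget."""
    for max_ms, tier in TIERS:
        if latency_budget_ms <= max_ms:
            return (function_name, tier)
    return (function_name, "central data center")

print(place("collision-avoidance", 5))    # must stay local to the process
print(place("inventory-forecast", 5000))  # can live centrally
```

The design point is the one enterprises make: the mission’s latency budget, not the AI technology itself, decides where each element has to run.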

When looking at these applications overall, enterprises have tended to look at “event flows”, which suggests that most of the distributed AI is expected to be used to handle real-time systems. There are a few examples that fit primarily into the fourth factor of the last paragraph, but both types of distributed AI envision two models linked in a work or event flow, and in fact some enterprises note that it’s useful to think about the network needs of distributed AI by thinking of the AI elements simply as application components.

The enterprises dismiss the notion that a local/personal AI element would draw significant data from another location, noting that this relationship violated the second factor noted above. They also point out that if significant data is needed, then it is very likely that the result is not needed immediately because the data could not be analyzed in a short time. That would mean that there was no latency or reliability reason why any portion of the application needed to be co-located with, or carried by, the user.

The implication of this is that there seem to be two models of distributed AI, each with its own rules for traffic generation. In one model, an event-flow model, the presumption enterprises make (and some say they’ve already experienced) is that a “local” model handles some events that are time-critical, and requests help from a deeper model for some other events. This help is returned in the form of an answer or data that may then be further used by the local model. The deeper AI is, in nearly all cases, digesting information rather than forwarding everything, so there isn’t an expectation of (or experience with) a lot of traffic.

In the second model, which is more like a transactional model, the local model passes off something for deeper processing. It may be that this something is a transaction being passed to a normal software application, or it may be something that goes to AI. A confirmation is returned in either case, perhaps, and there may be some data associated with it, but no more than would accompany a software application’s handling of a transaction.
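One way to picture the event-flow model enterprises describe is the sketch below: a local element acts directly on time-critical events and defers the rest to a deeper model, which returns a digested answer rather than a bulk data flow. All names, thresholds, and structures here are my own illustrative assumptions.

```python
# Hypothetical sketch of the event-flow model of distributed AI: the
# local element handles time-critical events itself and defers others
# to a deeper model, which replies with a small, digested answer.
def deeper_model(event):
    # Digests information rather than forwarding everything; the reply
    # is small, which is why enterprises expect little WAN traffic.
    return f"recommendation-for-{event['id']}"

def local_model(event):
    if event["deadline_ms"] < 50:  # time-critical: must act locally
        return {"action": "handled-locally", "event": event["id"]}
    advice = deeper_model(event)   # defer; get a digested answer back
    return {"action": "deferred", "event": event["id"], "advice": advice}

print(local_model({"id": "overheat-alarm", "deadline_ms": 10}))
print(local_model({"id": "maintenance-window", "deadline_ms": 60000}))
```

The transactional model described next is structurally the same hand-off, except that the deeper element may be ordinary application software rather than AI, and the return is a confirmation rather than advice.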

What about the notion of “token services”, some sort of low-latency service aimed at transporting AI model tokens? Enterprise types who are involved in budgeting for network services think this is, to quote one, “outlandish”. In AI, a token is a unit of data within an AI model, which means that unless the model hosting elements are distributed geographically, there’s no WAN service involved. Enterprises see no reason to distribute a model. They also don’t see a reason to have a model run somewhere distant from the model’s data sources. The theory seems linked to the idea that enterprises would use cloud-hosting of AI with their own data, which none say is a practical notion for performance, reliability, cost, and governance reasons. In any event, if they did need to push tokens over a WAN, they would not do so with a usage-priced service, especially since they have no control over how many tokens a model might want to send.

The notion of premium AI handling for pay is especially implausible for mobile services, they say. There is a value in mobile connections for IoT, when/if it expands offsite, but this is because of IoT application latency constraints and not AI, and it’s probably related to edge computing services. However, it’s interesting that some enterprises are thinking about supporting at least what might be called “metro-mobile IoT” by hauling the events to their own data center. Their theory is that if there are low-latency mobile services available, they’d be as good connecting to their data center as they would be to a metro hosting point for edge computing.

Why, given all of this, are we hearing so much about AI-specific services? I think the answer is that a lot of people, vendors, and telcos are trying to validate opportunities for their own products/services/interests by linking things to the current dominant hype wave, which is AI. AI is a way of creating functionality, applications or components thereof. It’s the mission that sets the connectivity requirements and not the technology used to meet it. If AI demanded a whole new set of premium services, enterprises would find it even harder to make a business case for it. Why would you toss application software that worked fine with inexpensive best-efforts connectivity, and toss all your hosting resources for it, to embrace something that would raise your communications and hosting costs?

We’ve all heard that telcos had a “field-of-dreams, build-it-and-they-will-come” mindset, which I think is clearly the case. We may be underestimating the impact of this. When you think of markets from the supply side, you frame out technologies you could offer, then try to find things that they can be used for. I remember, back in the 1980s, a meeting on ISDN (Integrated Services Digital Network), the first planned successor to plain old telephone service (POTS). One vendor came in, excited, and said “We have a new application for ISDN! It’s called ‘file transfer’!” Well, there’s a difference between what a service can be used for and what justifies paying for it. ISDN learned that the hard way. AI is probably doomed to do the same.

Why the Shift in Optical-Network Focus to Hyperscalers is Bad for Telcos https://andoverintel.com/2026/03/18/why-the-shift-in-optical-network-focus-to-hyperscalers-is-bad-for-telcos/ Wed, 18 Mar 2026 11:48:24 +0000 https://andoverintel.com/?p=6335 It’s hard to get love these days, if you’re a telco. Wall Street questions your business model. Standards initiatives you’ve traditionally depended on, like those of the 3GPP, seem to be turning to focus on what vendors want, not what you need. Now, Light Reading is saying “The optical industry is reorganizing around hyperscalers, and telecom’s voice is fading.” All this is bad ink for sure, but the last of the points may signal a much more serious problem.

Is the optical industry reorganizing around the hyperscalers? Yes, because the goal of the optical industry is to sell optical gear, and hyperscalers are a growth market, where telcos are not. But why is that? Because selling capacity is a business that’s already commoditizing, and getting even more so. But why can hyperscalers grow their optics inventory, then? Because they don’t sell capacity. Cloud services are a higher-margin business, and when you have a good retail margin on something, you can afford to make improvements and investments to build your business. Telcos, relentlessly focused on somehow making bit-pushing a premium service, have failed to create anything with good retail margins.

So far, this is a story about why a company that sells a high-margin product can afford quality packaging. Most hyperscaler optical purchases have been related to inter- or intra-cluster pathways, and data center interconnect missions, which doesn’t seem much of a threat to the telcos. But….

The good-retail-margin business of the hyperscalers, cloud services, is directed at creating front-end elements for business applications, either facing customers/partners or their own workers (via SASE). Enterprises say that they are increasingly using the same applications and the same front-end model for workers, and that means that the connection to branch locations that’s now made by VPN services could instead be made from cloud to data center with a single trunk. In other words, cloud services could fairly easily absorb VPNs.

The largest number of optical trunks that many telcos deploy relates to access Ethernet connections, and of course the majority of these go to branch sites as a part of VPN services. If these services were no longer used, or even if the usage was significantly diminished, you’d be putting a lot of optical gear out of service. And there are other forces acting to limit access optics growth.

The number of sites that are a candidate for optical connections, other than consumer broadband based on PON, has generally been fairly stable in developed economies. Recently, with the advent of online shopping, we’ve seen very slow (if any) growth in the number of retail sites. This is important because sites that have devices generating traffic (POS terminals, for example) are more likely to want something better than consumer-grade broadband Internet.

VPN usage has also recently been capped, even reduced, by the use of SD-WAN, which is a VPN overlay on the Internet. Where good-quality consumer broadband is available, SD-WAN can support branch and even home office connection to the corporate VPN at a much lower cost, even where cloud services are not used as a front-end to enterprise applications and SASE is not involved.

For telcos, then, the problem of flight from VPNs could be exacerbated by the shift of workers to the cloud front-end of applications, accessed via the Internet, rather than to direct VPN access and data center front-ends. There is every reason for the cloud providers to promote this, since they make money hosting the front-end technology and don’t make money selling VPN services.

This is far from an unlikely prospect. Think for a moment about this: The prevailing view for years on cloud computing was “Everything is moving to the cloud”. I don’t believe this and never did, but suppose it were correct. If the cloud is the host of the future, then there is clearly no need for VPNs. None, zero. There is no need for anything except the Internet and access to it. Telcos would be confined to the least-profitable piece of their business, the most capex- and opex-intensive, which is access. Now, ask what the difference is between this scenario and one where the cloud absorbs not everything, but all front-end functions. Users still access “the cloud” via the Internet, right? The only thing that changes is that the cloud has to connect to data centers for the back-end piece of applications. So, looking at the US market specifically, we have over a million satellite sites that today are networked to VPNs. There are a bit over ten thousand enterprise data center sites, so the quantitative difference between cloud-eats-front-end and cloud-eats-everything is ten thousand connections out of 1.01 million.
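The arithmetic in that comparison can be checked directly, using the article’s round figures for the US market (the variable names are mine):

```python
# The article's round figures for the US market: satellite sites on VPNs
# today versus enterprise data center sites the cloud would still reach.
vpn_satellite_sites = 1_000_000  # "over a million satellite sites"
data_center_sites = 10_000       # "a bit over ten thousand" data centers

total = vpn_satellite_sites + data_center_sites  # 1.01 million connections
share = data_center_sites / total
print(f"cloud-eats-everything differs from cloud-eats-front-end by "
      f"{share:.1%} of connections")
```

In other words, roughly 99% of the connections are identical in the two scenarios, which is the point of the comparison: absorbing just the front end gets the cloud nearly all the way there.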

The challenge that telcos face is that connectivity is only a way to distribute experiences that people want. It’s the experiences that they value, just as they value cars (and buy them) rather than roads, which they simply expect. The telcos have called this shift in wants and willingness to pay “disintermediation”, which seems to imply that some outside force has disconnected them from their rightful place. Not so; they disconnected themselves. The problem now is that the shift is threatening even connectivity.

There is no real risk to access connections; nobody wants into a low-margin business, and cloud providers know very well that the cost of providing national broadband access would be far higher than any return they could expect on the investment. Long-haul connectivity is another matter. What you need to create a core network isn't nearly as expensive, and much of it is capacity the cloud providers must build anyway to support their distributed hosting. They could surely steal business VPN services from telcos, at least for those sites where cloud front-ends dominate.

The question now for telcos isn’t whether they’ve fallen into another disintermediation trap with the cloud services and providers—they have. It’s whether the same thing might happen with edge computing. It’s very clear to enterprises that the only potential driver of massive new IT and network spending would be real-time applications, and it’s increasingly clear that some (Nvidia, notably) on the vendor side see this too. Realizing this real-time computing opportunity means deploying edge elements that are hosted, not on premises. If the cloud providers, or anyone other than the telcos, get this piece of business, telcos will be confined to an even smaller and less profitable niche than they now occupy, or will occupy if enterprise VPN connections shift to somewhere in the cloud.

]]>
6335
Enterprise Views of the New HPE/Juniper https://andoverintel.com/2026/03/17/enterprise-views-of-the-new-hpe-juniper/ Tue, 17 Mar 2026 11:06:45 +0000 https://andoverintel.com/?p=6333 The HPE/Juniper deal had a lot of promise from the first, and also a lot of risk. M&A in the tech space these days isn’t exactly a cakewalk, after all. But from the most recent earnings reports, it looks like the deal is navigating the reward/risk space reasonably well. There’s no reason to doubt the views of HPE’s CEO (Antonio Neri), stated on their earnings call: “Phase 1 of our Juniper integration is complete. We remain on track to achieve our fiscal ’26 synergy targets.” The big question is the next statement: “As we move to our second phase, we are focused on building a new networking market leader by aggressively executing our strategic product and software roadmap while driving revenue synergies through our go-to-market scale.” The key word is “strategic”, and I want to refer to a blog I did last fall to address it.

There are two ways that HPE could build a new networking market leader. One is to leverage the sales force and account relationships of HPE and Juniper to cross-sell. Sales of networking and server technology today, according to enterprise buyers, largely ride on enterprise plans to expand the data center. The other is driving new missions that create those plans. Tactical, then strategic.

What enterprises tell me is that HPE/Juniper is doing very well in the first of these areas. If you reference my blog, which talks about a Street presentation made last fall, the “Creating a new networking industry leader” step is succeeding for HPE/Juniper, without question, and this provides two key and almost immediate benefits relative to rival Cisco.

The first benefit is in networking. The cross-selling of Juniper gear gives HPE a way to leverage Juniper in its own server accounts. Where projects that involve network upgrades are in play, HPE/Juniper is more likely to see them and more capable of responding quickly at the sales engagement level, which is a good thing. Enterprises tell me that this is working for HPE.

The second benefit is in server sales. Cisco has no significant server position with enterprises, no significant influence in that area. Since many of the data center network upgrades also involve server purchases, HPE is now able to use Juniper's sales force to pull its servers into projects it would have missed otherwise. That, according to enterprises, is also working.

OK, that’s tactical success, then. This is the level of engagement that’s driving the current results HPE is reporting, and I think that’s consistent with what Neri says in his comments on the earnings call. What about strategy?

The only enterprises who offered any negative comments on HPE/Juniper focused on this area. Almost all enterprises are a bit frustrated by the AI hype because it’s making any attempt at AI planning difficult. They’d love to see someone straighten out the mess, and a big part of that is unraveling the relationship between AI overall, self-hosted AI, and networking. All vendors seem to say that you need to prepare for AI, prepare everything, but enterprises know that to make a business case, they need to prepare for applications, for missions. So they look to a vendor like HPE, who now has a unique mix of IT and networking, to help them strategize.

Strategic influence is something I’ve measured for decades, in some way or another. Through the entire period, two truths have emerged. First, data center technology is actually the biggest driver of network technology. If you talk to enterprises about why they might change how they do networking, the biggest reason by far is that they’re changing the applications and hosting technologies they use in some way. Second, IBM has consistently had the highest level of strategic influence of all data center vendors. In the last year, they scored roughly a third higher than any of the others (HPE, Dell, SMC).

Strategic influence is what lets a vendor get involved in a problem, a need, before it becomes a project. Their influence usually makes a major difference in how quickly a problem or opportunity can be translated into a business plan that then drives IT procurement. So, if HPE’s next phase is to influence strategy, it’s important to see whether their strategic influence is improving.

If you're tired of two-option framings, I'm going to disappoint you by introducing another. One option compares HPE/Juniper's influence to Cisco's, and the other measures it against the broad metric of the ability to drive project evolution. Against Cisco, HPE is clearly superior in potential now, because Juniper, Cisco's networking rival, had no significant enterprise server influence on its own, and HPE/Juniper has it. However, whether that difference is important depends on whether the influence is actually bearing fruit in terms of new projects for which HPE/Juniper has a leg up. It's less a matter of whether they can compete better on an RFP than whether they can induce one to be launched and wire it to maximize their own chance of winning.

In this regard, enterprises say that HPE/Juniper isn’t there yet, to the point where some are actively upset at what they’re hearing. Overall, I don’t see any statistical difference in HPE/Juniper’s strategic influence with enterprises, versus the two separately, and the “why” largely comes down to AI.

AI is the primary strategic driver of data center change, and so of network change. It's not that AI necessarily creates things like traffic, but that it creates potential new business cases. As they used to say in the space program, no bucks, no Buck Rogers. Enterprises are struggling with AI because their own project knowledge collides with the popular commentary on AI usage. Enterprises don't see much of a business case from cloud-hosted AI, nor do they see much traffic or network change created by connecting AI users to self-hosted AI. They do see new AI agent missions that use AI to augment current applications and workflows, and these agent missions drive real data-center-network needs and potential increases in the number of users who might employ some applications, thus impacting traffic levels and potentially QoS even in the WAN.

For over a year, enterprises have told me that IBM is unique in understanding how they would see and adopt "AI agents". They still say that about IBM, but they don't yet say it about HPE. That leads to what I think is the most critical point about the future networking benefits of HPE/Juniper: you cannot drive new projects in AI without being able to drive AI agents as applications; networking AI is meaningful to enterprises only if they have AI to network.

I think HPE is trying to address this, but not in a systematic way, and more aimed (it seems) at large-scale AI providers like sovereign AI than at enterprises. I attribute the specifically unhappy views of some enterprises to disappointment with sales comments on the strategic points, which suggests that at least some salespeople are being asked for insight and aren't yet able to provide it. This frames a risk; it's one thing to fail to exploit an opportunity to realize strategic influence on an account, and another to fail to look strategically credible. HPE may have to work on that.

]]>
6333
Is Satellite Emergency Service More Disintermediation Risk? https://andoverintel.com/2026/03/12/is-satellite-emergency-service-more-disintermediation-risk/ Thu, 12 Mar 2026 11:24:33 +0000 https://andoverintel.com/?p=6331 Here’s an interesting comment I heard from an MWC attendee: “It’s interesting that the most buzz from the show came from a topic, Starlink Mobile, that represents telco Disintermediation 2.0”. I think it’s an interesting point.

Telcos have complained for decades that others have exploited their connectivity assets, demanding low prices for Internet, then building high-margin services on top. This was the original meaning of "disintermediation", and it's interesting that the term is now being applied to a satellite service set that doesn't even ride on telco connectivity, but rather augments it. But in a more philosophical sense, it may be valid. Could satellite players offer emergency connectivity to telcos just to demonstrate to users that satellite is almost always available, and then expand that "emergency-only" role to eat into telcos' dominance of persistent services?

The “Featured Story” from LightReading for the show over the weekend was “At MWC, SpaceX execs tout Starlink V2 – and a key carrier partner for it.” SpaceX, speaking at a keynote, talked about the value of universal connectivity, not only broadband in areas where terrestrial infrastructure can’t serve, but also for “life-saving connectivity”, meaning emergency communications in those same areas. I think that’s a valid story, but it’s also one with implications.

How do you use mobile communications? Most of the people (roughly 80%) who tell me about their personal use of mobile combine "Internet" applications that in most cases are (or could be) connected via WiFi with spontaneous personal calls/messaging that very often have to connect via cellular service. That mirrors my own usage; I don't need mobile broadband most of the time because all I do when "mobile" is answer calls or texts.

OK, suppose that SpaceX or somebody else (Amazon or Google, for example) offers nothing except mobile calls and texts? I get a phone number that always works, everywhere. I drop my normal mobile service completely, and simply connect via WiFi in the fixed places where I really use other connected applications. Where does this leave telcos?

That telcos need to cut deals to offer customers universal emergency connectivity shows that mobile services can't fulfill all connectivity needs. Satellite services can, particularly if we limit their target to calls and texts only. If we assume that a satellite service was part of a kind of VPN that would automatically (via the smartphone or device) connect via WiFi when such a service was available, we'd have a model that would use relatively little satellite bandwidth, and one that for many could replace traditional mobile services.
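A WiFi-first, satellite-fallback connection policy like the one described could be very simple. Here is a minimal sketch; the names and the policy details are my own illustration, not any provider's design:

```python
# Toy model of a "WiFi when available, satellite for calls/texts only"
# transport policy. This is an illustrative assumption, not a real stack.
from dataclasses import dataclass

@dataclass
class LinkState:
    wifi_available: bool
    satellite_available: bool

def select_transport(link: LinkState, service: str) -> str:
    """Pick a transport for a service ('call', 'text', or 'broadband')."""
    if link.wifi_available:
        return "wifi"                     # full service set rides on WiFi
    if link.satellite_available and service in ("call", "text"):
        return "satellite"                # narrowband-only satellite fallback
    return "unavailable"                  # broadband waits for WiFi

print(select_transport(LinkState(False, True), "text"))       # satellite
print(select_transport(LinkState(False, True), "broadband"))  # unavailable
```

The point of the sketch is the bandwidth asymmetry: only low-rate calls and texts ever touch the satellite channel, which is what would keep the model cheap.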

Who might want this? Think almost any MVNO, but in particular some player like a cable MSO, some of whom already have WiFi extension to mobile service options. Or Google, whose Fi service uses T-Mobile cellular, who offers satellite emergency connectivity on some recent Pixel models, and who offers international connectivity. Anyone launching Internet satellite service, of course. Who doesn’t, or shouldn’t? Telcos.

Many of the younger people I know wouldn't like this because they rely more on social media than on calls and texts, but could social-media providers offer feature-limited versions in order to encourage satellite providers to integrate them into their call-text VPN? Why not?

Mobile services are used in different ways by different people. When public WiFi was limited, there was a lot of value to full-scale mobile broadband. Today, less true, particularly for those who don’t use social media as a substitute for continuous physical presence.

So isn’t this a justification for the 6G integration of satellite service and perhaps even WiFi with mobile services? Not unless the telcos want to accelerate disintermediation of their mobile services. The smart play for satellite players would be to encourage this sort of integration, in order to take advantage of WiFi or even mobile service to offload higher-bandwidth applications or service in areas where there are a lot of users who could load up satellite channels.

We could be, nay probably are, headed to a time when instead of satellite being a small-scale emergency add-on to mobile service, mobile could be a specialty off-ramp for satellite, something to use if WiFi isn't available to serve the mission. I think that telcos could have had significant influence in this area, but it's too late now. The old adage that telcos fear competition more than they value opportunity has reared up and bitten them, and hard.

Satellite voice services came along in the early 1980s in response to high long-distance rates, which telcos kept in place to protect their profits. The telcos eventually abandoned those rates, because voice traffic had a minimal impact on network capacity in the Internet age. But institutional memory kept painting satellite providers as the competitive enemy, and so the telcos shunned making deals with them to extend coverage to places where even mobile infrastructure couldn't profitably serve.

Similarly, telcos could have recognized that social media created an alternative to many calls and texts. If they had, might they have launched social-media-linked services that integrated call/text services into the social media site, rather than have the sites build a parallel service? Sure, but wary of “disintermediation” by OTTs who they saw as predatory competitors, telcos hunkered down on the old, and by doing so fostered the new in addition to losing the opportunity for new services.

Telcos, friends, are too slow, too cautious, too protective of the remnants of the past. Their own trade shows are becoming a showcase for others who are faster, more risk-tolerant, less rooted in current thinking, and less fearful of change. Those others are increasingly controlling the agenda, and the last of the opportunities for telcos to seize any high ground is passing away.

If the satellite impact is real, it would destabilize the telco mobile services business, which is their most profitable, so it would destabilize the telcos themselves. We would almost surely have a major profit and infrastructure investment problem. Thus, there's a public policy point to consider here. What happens if the current trend continues? Telcos would eventually have to become public utilities in the old regulated sense, with regulator-set pricing and profit levels. Or even become a government monopoly. Sound like pre-1980s, pre-privatization thinking? It is. I think that what we're seeing is that we went about those reforms wrong, just as we've done in many areas in "deregulating" other utilities like electricity, water, and gas, and perhaps even things like mail services. Is there a real, and unappreciated, risk in deregulating essential services? We should be asking that now, before it's too late.

]]>
6331
Some Telco Views from MWC https://andoverintel.com/2026/03/11/some-telco-views-from-mwc/ Wed, 11 Mar 2026 11:31:37 +0000 https://andoverintel.com/?p=6329 As my telco contacts digested MWC, they offered an interesting consensus; 55 of the 75 who commented on the show made this remark: "Open RAN can't fix 5G/6G", which I think is an interesting comment on both, and one that obviously raises the "Why?" question.

It's clear to almost every telco that we're evolving to a more "converged" view of broadband, one based on common infrastructure for fixed, mobile, satellite, and even WiFi. The clarity is spoiled by the fact that while this seems obvious at this point, and should have been a fundamental point in the design of the 5G/6G architecture, it really didn't happen for 5G and isn't happening for 6G either.

“If you accept broadband convergence, the primary goal has to be reuse of infrastructure elements across every type of access. We don’t have that,” one telco noted. The big offender, they say, is mobile, but there’s also a need for what another telco called “the Grand Unifier”.

If you look at the architecture of 5G, you can argue that it really defines three things—the RAN, mobility management, and the core. Most of the telcos think that the "core" piece should not exist as a part of mobile infrastructure at all, but rather be a single common central element of broadband. Access features can map to core features, but they should never extend into the core. That's an essential starting point, telcos think, but not the whole story.

Mobility management, if you cut to the chase, has two main elements—registration of devices and "finding" the access point for those devices as they move about. The function should be a boundary function at the point of connection to the service core, perhaps some metro-level point, where a "service address" known to the service network (say, the Internet) is mapped onto a tunnel that gets the packets to the right access point. Today, that's mobile-only, but many telcos think it should be a universal feature, one that lets devices roam to any cell, any wireline broadband connection (through WiFi to a device), and satellite. This is a goal, most agree, of 6G, but the implementation may be an issue. What a select group of thoughtful telcos want is for this to be done by creating a different relationship between the control and user planes.

Does this sound like vRAN? Not according to the commenting telcos. According to them, vRAN is about turning functions into software, and of course it’s about the RAN. Telcos say that you have to start with the whole mobility management issue if you’re really going to optimize infrastructure for a converged future. That means taking all the user-plane functions and incorporating them in the router. If you’re going to create tunnels and steer/route onto and off the tunnels, you do that in the routers, so that all the data handling for all access options is built into devices that are already handling “traffic”. You then create an interface between the routers and the mobility control-plane features so that mobility management can create tunnels, direct traffic, and break them down. MPLS, they note, already does tunneling in routers, and many think that MPLS should be the mechanism of choice here.
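A toy sketch of the split the telcos describe, with control-plane software creating and redirecting tunnels while routers do nothing but forward on them, might look like this. The class names and label scheme are my own assumptions, not 3GPP or MPLS constructs:

```python
# Control/user-plane separation in miniature: the router holds only a
# label-to-access-point table; all mobility logic lives in software above it.
class Router:
    def __init__(self):
        self.tunnels = {}                    # label -> access point (next hop)

    def install_tunnel(self, label, access_point):
        self.tunnels[label] = access_point

    def remove_tunnel(self, label):
        self.tunnels.pop(label, None)

    def forward(self, label, packet):
        """Data plane: pure table lookup, no mobility awareness."""
        return (self.tunnels.get(label, "drop"), packet)

class MobilityManager:
    """Control plane: registers devices and re-points tunnels as they roam."""
    def __init__(self, router):
        self.router = router
        self.registrations = {}              # device -> tunnel label

    def register(self, device, label, access_point):
        self.registrations[device] = label
        self.router.install_tunnel(label, access_point)

    def handover(self, device, new_access_point):
        self.router.install_tunnel(self.registrations[device], new_access_point)

router = Router()
mm = MobilityManager(router)
mm.register("phone-1", label=100, access_point="cell-A")
mm.handover("phone-1", "wifi-B")             # device roams from cell to WiFi
print(router.forward(100, "pkt"))            # ('wifi-B', 'pkt')
```

Note that the handover touches only the control plane's view; the router keeps forwarding on the same label, which is the access-agnostic property the telcos are after.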

Once you have the data plane centered in traditional routers, you can host the control-plane functions as software features on utility servers. This makes mobility management a true overlay element, and also enables it to direct service traffic to any access technology. However, there may still be some extra handling issues, given that every access network assigns a device an address. How do you manage directing a service address to an access address? One telco suggestion was to treat an entire access network as a private address space and push and pop address headers at the boundary. This could be a function of a router if the router could pull address translations from a database, somewhat as SDN switches would pull routing information, caching it for as long as it's needed. The access networks would presumably manage this at the boundary.
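The boundary-translation suggestion can be sketched in a few lines. This is my own illustration, with invented class names, example addresses from the documentation and private ranges, and a dictionary standing in for the central store:

```python
# Boundary function mapping service addresses to private access addresses,
# pulling entries from a central directory on a cache miss (much as SDN
# switches pull flow rules and cache them for the duration of need).
class BoundaryTranslator:
    def __init__(self, directory):
        self.directory = directory   # central service->access address store
        self.cache = {}

    def resolve(self, service_addr):
        if service_addr not in self.cache:       # miss: pull and cache
            self.cache[service_addr] = self.directory[service_addr]
        return self.cache[service_addr]

    def push_header(self, packet, service_addr):
        """Encapsulate: prepend the private access address at the boundary."""
        return {"access_addr": self.resolve(service_addr), "inner": packet}

    def pop_header(self, framed):
        """Decapsulate on the way back out of the access network."""
        return framed["inner"]

# Example addresses: a public documentation address (RFC 5737) mapped to a
# private access-network address (RFC 1918).
directory = {"203.0.113.7": "10.1.4.22"}
bt = BoundaryTranslator(directory)
framed = bt.push_header({"dst": "203.0.113.7", "data": "hello"}, "203.0.113.7")
print(framed["access_addr"])                     # 10.1.4.22
print(bt.pop_header(framed)["data"])             # hello
```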

What about the "open" part, like Open RAN? The rule, the telcos say, remains that control and data plane separation, with the former in software and the latter inside routers, would be paramount in designing the mechanisms for access, whatever they are. Thus, they'd want any initiative to open the RAN to continue the separation of control and data, and to use routers/switches for the latter. As for the former, there is some interest in having all control-plane functions be software, but a recognition that as you get closer to the antenna, the value of not only openness but even of control/data separation diminishes.

This attitude arises from two factors. First, the major telcos don't want to integrate multiple vendors. Second, they don't think that openness begets innovation in the RAN, because only the giants can really afford to innovate. "The concept of 'open' is just like any other tech feature, meaning it has to pay back overall," one commented. "The media loves an open RAN," said another, "but for us, not so much." Most admit that the whole concept of openness ends up being justified as a way to hammer down prices, and as a defense against a vendor leaving the market. Those justifications increasingly don't work.

What appears to be true, surely for those telcos who offered me comment and likely for others, is that neither Open RAN nor vRAN is seen by telcos as a broad solution to their business challenges. Some see a path to creating an infrastructure model and service model in a 6G-ish sort of way, but with specific technology elements so far absent from, or even contradicted in, 6G standards. Others are simply trying to navigate the demand forces that drive a need for greater capacity, the lack of differentiated services that could command premium payment, and the growing pressure on them to constrain costs to stabilize their business model.

]]>
6329
Bound and Unbound Systems in Real Time Automation https://andoverintel.com/2026/03/10/bound-and-unbound-systems-in-real-time-automation/ Tue, 10 Mar 2026 11:29:02 +0000 https://andoverintel.com/?p=6327 My views on the importance of real-time applications for the advancement of tech overall, and for new telecom service opportunity generation, are well-known to those who follow my blogs. Over the last six months, 64 enterprise IT planners/architects have offered me comments on their own views and experiences in this area, and I think they offer a window into what’s really likely to happen in the space.

I promise enterprises anonymity in return for their sharing views with me, and that means not providing anything that might lead to their being identified. Since terminology on this topic is inconsistent, I’m going to frame the concepts in my own words to avoid giving away the person who commented.

Enterprises overall agree that real-time applications are key to any increase in IT spending or any changes in the telecom services they're likely to consume. Twice as many say, for example, that this is the major driver in both areas as say that AI is even a "significant" contributor, unless AI is used as part of a real-time strategy. As I've noted in the past, almost all the actual real-time application progress has come in the form of orderly expansion of existing process automation applications, which today rely almost totally on specialized edge systems running what's typically known as a "real-time operating system" (RTOS), placed local to the processes they support. These systems are what we can call a "bound process", meaning that the process involved uses some form of mechanical system like an assembly line, substation, refinery, etc. This bound process can be represented by an IT-generated model that today would be characterized as a "digital twin".

The collective comments of the 64 specialists indicate that the driver for change to these bound-process applications is the fact that, in nearly all cases, they are tightly coupled to related processes that are not hosted within the same facility. A factory needs to acquire parts/materials, and ship finished goods, both of which are external to the current applications. Where efficiency can be improved by linking all these interdependent elements of a “business”, the linkage can sometimes be handled by simply exchanging events/triggers, and often the linking processes themselves take measurable time, so these exchanges don’t require special communications resources. If there is a tighter coupling required, which is likely the case if the various interdependent elements are not co-located but are still proximate (within a larger facility, like a plant or yard, or perhaps even among metro-located elements) then real-time control of the interactions may be useful. This is the source of most realistic edge computing service opportunity.
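The loose, event-triggered coupling described can be sketched with a simple publish/subscribe linkage; the process names and events here are hypothetical, invented for illustration:

```python
# Interdependent bound processes (a factory line and a shipping process)
# exchange events/triggers rather than sharing one real-time control loop.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        self.subscribers[event].append(handler)

    def publish(self, event, payload):
        for handler in self.subscribers[event]:
            handler(payload)

bus = EventBus()
shipments = []

# The shipping process doesn't poll the line; it reacts to a trigger.
bus.subscribe("batch_complete", lambda p: shipments.append(p["batch_id"]))

# The factory's bound-process controller emits a trigger when a batch ends.
bus.publish("batch_complete", {"batch_id": "B-1042", "units": 500})
print(shipments)   # ['B-1042']
```

Because the linking processes (trucks, materials handling) take measurable time anyway, this kind of exchange needs no special communications resources; tighter coupling only becomes valuable when the elements are proximate, as the text notes.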

Our specialists also note that when this sort of multi-process symbiosis is assessed, it is sometimes (or even often) the case that some of the new processes being assessed are “unbound”, meaning that they involve elements that are more autonomous in behavior, like human workers or things being run by them. A truck doesn’t run on a track, it runs on a road. Workers in a warehouse move according to a combination of policy/training, their own will and assessment of conditions, and the local conditions themselves. While it might be possible to create a digital twin representing unbound processes, it’s not a simple task of creating a model of a static set of elements with static relationships, as it would be in the case of bound processes.

The big barrier to including unbound processes in a process automation application is creating a process model for them. The best way to get that, all of the 64 agree, is by incorporating video analysis into both the creation of the model and the populating of real-world conditions into the model. That makes AI analysis of video the most important AI mission relating to new IT spending and new telecom service opportunities.

One example of this has already been announced, and I mentioned it HERE. Arda's mission is apparently primarily model-building, but details are sparse at this point (47 of the 64 specialists had heard of it, but none had gotten any briefings from the company). This sort of capability would allow a company to place cameras to record activity in a space, or in something like a vehicle, and from that determine how a process was actually being conducted. The specialists doubt that this could be done without the benefit of human interpretation, or the ability to draw on the digital-twin models of any bound processes in the facility, to relate movement and position to mission and task. Obviously, we need to see more detailed work in this area.

In any event, building a model from the real world isn't enough, according to the specialists. You need to be able to analyze video to populate the model with conditions, or the model can't fully reflect real-world behavior. A digital twin of a busy intersection might offer you a lot of insight into how the intersection might behave under various conditions, but not much on how it's currently behaving. If the purpose of a process model is to facilitate the introduction of IT knowledge and IT-directed action into a real-world process, you need to know what the conditions are at the moment that's being done.
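The distinction between a twin's static structure and its video-populated current state can be sketched in a few lines; this hypothetical intersection twin, its detection format, and its names are all my assumptions:

```python
# A digital twin separating static structure (how the space *can* behave)
# from live conditions fed by video analysis (what it's doing *now*).
class IntersectionTwin:
    def __init__(self, lanes):
        self.lanes = lanes                          # static structure
        self.current = {lane: 0 for lane in lanes}  # live occupancy

    def update_from_detections(self, detections):
        """Ingest per-frame video analysis: a list of (lane, vehicle_count)."""
        for lane, count in detections:
            if lane in self.current:
                self.current[lane] = count

    def congested(self, threshold=5):
        """Query the model's *current* state, not its static possibilities."""
        return [lane for lane, n in self.current.items() if n >= threshold]

twin = IntersectionTwin(["north", "south", "east", "west"])
twin.update_from_detections([("north", 7), ("east", 2)])
print(twin.congested())    # ['north']
```

Without the `update_from_detections` feed, the twin can answer "what if" questions but never "what now" questions, which is the gap the specialists identify.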

The ability to model unbound processes is critical to optimizing the impact of real-time applications on business efficiency, which means on willingness to invest in IT resources to do the optimizing, which means spending on IT, and spending on network service enhancements to expand the scope of the applications. A smart city needs to know a lot about what’s going on in the moment, or it isn’t smart enough.

This all raises some questions, of course, the biggest one perhaps being the impact of video analysis on personal privacy. A worker in a facility might have some concerns about some video-analyzing AI agent watching them like Big Brother, but this could likely be contained. However, spreading this kind of thing to public streets and to buildings, to augment public-safety workers for example, could mean that more of the general public are dragged in. Street-level camera surveillance is accepted and even sought in some countries, and resisted in others (including the US).

The bound/unbound system issue is something that enterprises are starting to address, and it’s already demonstrating that it has major implications in terms of both targeting and technology. Given that system models, digital twins, are complex in themselves, adding in the dimension of how they’re populated effectively in the real world threatens to delay their realization. Fortunately, there are initiatives that are starting to provide technical solution pathways, if not final answers, to these problems. There’s a lot of money on the table in this space, so it’s important.

]]>
6327
MWC, COBOL, and Tech Fables https://andoverintel.com/2026/03/05/mwc-cobol-and-tech-fables/ Thu, 05 Mar 2026 12:48:07 +0000 https://andoverintel.com/?p=6325 What do MWC and COBOL have in common? Two things, one obvious and one not. The obvious link is AI, which is the dominant conversation at MWC. The not-obvious one is that the so-called obvious is covering up the important and real stuff.

Let’s start with COBOL, which is an acronym for “Common Business-Oriented Language”. I’ve done a lot of COBOL programming in my career, and I’m confident I could still code in it if I wanted to bother. The current focus on COBOL is a result of claims that because AI has a new tool to translate COBOL to another language, it threatens the whole software industry, or at least threatens IBM’s mainframe incumbency. Nonsense.

COBOL is probably the easiest programming language to learn and use, because it has an almost English-language syntax. I’ve coded in a lot of languages, and I am of the firm belief that any competent programmer could learn it in a week and work in it confidently.

The reason that’s important is that if COBOL were an albatross hanging around the necks of CIOs in IBM shops, they could have easily addressed that by simply changing the code they already ran, or compiling the programs on a different platform. You don’t need AI translation of code, people, you can get COBOL compilers for everything from PCs to Linux servers, in both commercial (including IBM) and free/open-source form (try GnuCOBOL if you need something). So if somebody wants to flee a mainframe, and if COBOL programs were the barrier, they’ve had the pathway to leap that barrier from the first, and still have it, independent of AI.

So why all the hype about this? Some of it is simply the result of click-seeking; you need stories so you grab on to something that has click potential. More people read about threats than about hopeful developments; good takes care of itself but bad has to be managed. Some is more complicated.

Wall Street is driving a lot of this nonsense. IBM’s stock has been on a roll because IBM alone, of all the big AI players, has actually had the story straight from the first. They have the highest enterprise-agent self-hosting success rate according to enterprises. You’d love to have gotten in on the ground floor on their stock, but most probably believed the hype that IBM was a dinosaur. I was actually asked to write a story on that before the stock took off on AI success, and I refused because I knew that it was B.S. Anyway, hedge funds have another chance, and a profitable one. They hit IBM’s stock with short-selling, drive it down, and make money when they cover their shorts with now-lower-priced IBM stock. They then buy, and when IBM goes up again, they make more money. If they can encourage the IBM-COBOL-dinosaur hype, they increase the potential for a big IBM drop, and the money they’ll make. And in the Internet and the click-obsessed world it created, they can do that easily.

Nonsense pays, at least for some.

Which gets us to MWC. What is a conference like that intended to do? Publicize, sell. Vendors spend a lot of money exhibiting there, and buyers spend a fair piece of change attending it. All this investment has to come with a return. For vendors, it’s visibility, attention. For buyers, it’s education, exposure. For both, there has to be something compelling going on, or nobody will care. The easiest way to get that is to ride a good hype wave, which is why we have an MWC focus on the intersection of two of them. One, 6G, ties into the ever-present hope that the Next Big Thing in mobile will redeem telco capex. The other, AI, ties into the current popular buzz. Link the two, as Nvidia has surely been working to do, and you have a Great Attractor.

What gets covered up, then? Besides the obvious and general answer, the truth? Behind the scenes, under the hype, there’s an alternate reality that happens to be at least related to the truth.

The problem with hype waves is that they crest and fall into the drink. That may be fine from a story perspective; the news that AI has died would be just as click-worthy as that it was going to kill you, your family and friends, and humanity in general. For vendors, though, it promises a stock crash that could destroy them, even though the hedge funds would make money on it. So, underneath it all, there’s stuff going on to build the Thing that Will Emerge From the Deep when the AI wave crashes. It may not be as dramatic as the original hype wave, but it could save a lot of vendor grief. So we need to know what it is.

I think it’s clear that there are a lot of players who believe that telcos need some form of business salvation. AI and 6G are the preferred pathways to that, but only because everyone wants a deus ex machina sort of solution (for those not familiar with the phrase, the Oxford Dictionary defines it as “an unexpected power or event saving a seemingly hopeless situation, especially as a contrived plot device in a play or novel”). That works in novels and plays, but not in the real world.

We hear that the issues with telcos, the barriers to AI, are data coherence, APIs, lack of skilled personnel, and so forth. All of that is true and false at the same time. Yes, those are issues, but not the issue. Every project faces them regularly. The solution is to make changes, spend money, and the issue is the “spend money” part. Could 6G revolutionize infrastructure? Yes, if telcos spend on it. Could AI transform telcos and enterprises? Yes, if they spend on it. The problem is that in order for that spending to make any business sense, there has to be a suitable return on investment. Make a 6G revolution, and nobody will be able to afford it, because we know from 5G experience that you can’t “build it and they will come”; you have to wait till they want/need it, then build it. With AI, what all the stories come down to is that investing billions in AI models could make something real, but would it do what we’re already doing better, better enough, to pay those billions back?

6G revolutions would make the vendors happy, but telcos have already said they don’t want a 6G outcome that requires major infrastructure upgrades. So, no revolution. AI replacing humans would make the AI pundits, and probably the media, happy (up to when AI replaced them, at least), but suppose that a giant AI data center can do the job of a human? Is spending a billion to get a human outcome when humans are already in the jobs a smart move? Oh, you say, the data center could replace a lot of humans, but if that’s the case then where is the need for all this new AI investment coming from? Where are the cost savings so far? Sure, all this MWC 6G stuff and AI-consciousness stuff is entertaining, but are we really talking about multi-billion investments justified by the fact that it’s fun to read about them?

API work and open-source elements in 6G could eliminate the telco fears of integration and operations complexity. The same could open enterprise core business data for AI exploitation, safely. There are initiatives in progress that would do these things, some that are even aimed at doing them, but you’ll not likely hear about them because they’re defensive positions, things behind the attractive hype that are intended to be the survival bunker for when the hype fails. Talk about them now and you’re shouting “The Emperor is buck naked!” Then your stock tanks, because the hype was the here and now, and the fallback is a step on a longer and more boring path.

The good news is that outside MWC (and not in COBOL, I’d point out), there’s real progress happening. A former OpenAI bigwig is reportedly starting a company (Arda) to automate manufacturing; it will analyze video to create a “digital twin” of an environment, then train robotic elements to work within it. This is a rational use of AI to connect itself to real-world processes, a critical step to an optimum future. We also had news of an AT&T initiative to replace netops AI applications based on giant cloud-hosted LLMs with small models hosted on AT&T’s own resources. This is what enterprises have been talking about for AI all along.

Hype waves on 6G and AI are likely to leave us all high and dry, unless…unless somehow some of the incremental steps that are contemplated, even announced, accidentally turn out something incrementally useful, or some player gets smart and tries to do that deliberately. There are signs, both in announcements like the one I just cited, and others from Nvidia and AMD in their telco-related announcements, that there’s a chance of both. The question for MWC, and for the AI community, is whether this will happen in time, or whether the waves of 6G and AI will crash over them.
