Telemetry News Now.
Welcome to the latest episode of Telemetry News Now. We are fast approaching May two thousand twenty five, so spring is in the air.
However, I'm feeling a little bit down, Justin, because, I just purchased a, a new MacBook Air for myself, a personal, MacBook, and it was about a thousand dollars, you know, regular full price, basic model, thirteen inch, because trying to travel with the full sixteen inch MacBook Pro that weighs about seven hundred pounds. Mhmm. Having that on a plane is is ridiculous. So I bought this little MacBook Air.
They don't give you a lot of space on an airplane these days.
They do not, especially for those that fly economy like me. And it's a great little computer. I love it. I actually took it on a plane already and enjoyed being able to take out my my computer for once. But then I get an alert this morning that there's a sale, and it's a hundred and fifty dollars off.
So I'm a little annoyed. I don't know if I can get a price adjustment. I'm not sure how that works, but that's where we are today. I'm a little preoccupied.
Tech refreshes are always fun. Timing that is near impossible. Right?
Yeah. Absolutely. So let's dive into today's headlines. So starting off in an article from Network World dated April twenty ninth, and that's yesterday for us as we record today on April thirtieth. At the twenty twenty five RSA conference, Palo Alto Networks introduced a bunch of updates aimed at securing enterprise AI environments.
Now a big part of this is the launch of Prisma Airs, A I R S, an AI security platform designed to protect AI models, agents' data, and, and even applications from threats like prompt injection, model tampering, data leaks, that that kind of thing. ARRIS includes features like AI model scanning. I'm not exactly sure what that means. I did look around on the Internet to get a little bit more color there.
Red teaming, that's very interesting. Runtime security, posture management. And then they also announced new enhancements to its Prisma SASE platform, which we're pretty familiar with, including a secure browser that extends protections to where users access applications and uses some sort of AI under the hood to detect and respond to threats. So clearly, with these updates and with the planned acquisition of Protect AI, which is a a different article in network world but related here, you know, and analysts estimate that deal to be worth about six hundred and fifty to seven hundred million, but I don't think they disclosed that.
They did not.
In any case, I think it's pretty clear that Palo Alto, they're making an aggressive push here to be the go to solution to secure what some would consider a vulnerable AI ecosystem.
Yeah. I mean, it's fascinating to see how deep into the AI itself they're actually going, right? It's not like they're just securing the network ports or, you know, acting as a WAF for people accessing the front end of a like a chat for an AI bot, right? They're actually literally going down and looking at malicious code, like you said, prompt injection, people trying to hack the prompts to get data to leak. The model scanning, yeah, I don't know enough about that. I would love to learn more about that, but there's definitely some actual things they're doing to protect actual AI workloads and model training and so forth, very similar to what you would do with code scanning to look for vulnerabilities and stuff is the way I kinda took it away from a bit of a layman's, terms.
Right.
Right. And, you know, I don't think that a lot of these attack vectors are brand new necessarily. I mean, it's just that we are talking about building workflows that weren't necessarily ever built before in the networking space and and in enterprise IT in general. And so, you know, there might be some data engineers and and data scientists and ML ops people that are already familiar with a lot of these things and that are already doing their due diligence to secure their workflows.
But it's time to kind of expand that outside of that niche into enter the broader scope of enterprise IT and then and in our case, into into networking and what we do in network operations. But, you know, if you think about it, the entire, MLOps workflow, we're talking about your data cleaning and preparation. So there's databases to secure an RBAC and, data in transit, data in motion and then the regulatory concerns around that data. So there's that. There's also the idea of, you know, what are the, the models that you have running and that whole runtime environment of what they're actually doing and what what agents they're able to use and then what tools those agents can actually use and locking that down.
Data leak is always that's data leaks are always an issue in general in data heavy environments and and now even more so. So I don't think any of this is necessarily new. It's just that we're expanding the scope into other areas where we gotta we gotta do our due diligence, our best best practices, really. And then, you know, yeah, we're gonna see folks like Palo Alto and other vendors as well that have security BUs or that are security companies outright build their canned platforms, their solutions to do that, you know, for you as a service.
Well, I think, you know, hopefully, this is gonna help with a lot of the concern that the industry has around AI and how do we use it safely, You know, how do we protect ourselves from some of these type of attack vectors, right? So still, like everything is a really a cutting edge and evolving area of technology is AI in general. But more that companies are investing in doing it securely, the more we can have some level of trust that we can actually leverage the benefits and the power of this technology in a safe manner.
So Yeah.
And and we're talking about almost a if if the analysts are correct, almost three quarters of a billion dollars investment here. So, certainly, there is money being put up for these kind of endeavors. So, certainly, people are taking it seriously, Palo Alto being what we're talking about now. But I'm sure we're gonna see more, and we've talked about Cisco in the past.
We're gonna we're gonna see more and more of this. And not just in terms of, like, which model is more secure or which model should we use or not use, but, you know, we're talking about beyond LLM wrappers for your application and that kind of cool stuff. We're talking about entire, you know, artificial intelligence workflows. So there's so many moving parts that there is a level of complexity that I think folks are gonna struggle with unless they've been doing it, you know, as professional data scientists for their career.
Right? And even then, they're focused on applying the model, not on the security piece. Right?
Mhmm.
Yep. So moving on, from Reuters, dated April twenty eighth, Alibaba just reached I'm sorry. I just launched QEN three, an upgrade to the previous QEN AI models and featuring what they describe as hybrid reasoning capabilities. And you can imagine that this is in the midst of growing competition in China's AI sector. So think about recent releases from, like, Baidu and DeepSeek.
Now this new release builds on the company's rapid release of QEN two point five max in January, this past January, and some other releases. And I personally consider, the QEN two point five family models some of the better models out there for my own use up to recently. I've been experimenting with GPT, o three recently, which I really like. But in any case, I think that this also speaks to their sense of urgency to keep pace with their own domestic challengers internationally, of course, but their own domestic challengers and what I guess we can only describe is this new and growing AI arms race. Mhmm.
Obviously, I think a lot of news came out about DeepSeek a while back, so I was familiar with that one. And this article is really about, Alibaba's new Quinn three, as you said. But the thing I had missed, I guess, in all of this was the announcement from Baidu.
I guess it was back in, what, January or no. It's actually just last Friday. Looks like that Baidu, announced their Ernie four point five turbo. So, yeah, like you said, a lot of AI arms race in China for trying to capture the domestic Chinese market, but, you know, also presumably trying to capture some some international market too. So Yeah. Absolutely. Very interesting to there are so many players in that.
Sure. For sure. And we're seeing that coming out of China, and it really does come down to China and the United States at this moment. So also from Reuters this week, Meta so moving on to, an American company. Meta launched a new Llama API to attract businesses and developers to build AI products using, you know, its own llama llama models. So this is obviously with the goal to position itself against competitors like Microsoft, Google, and, you know, more recently, DeepSeek and maybe Quinn. Right?
Mhmm.
And that llama's new API was unveiled at Meta's first AI developer conference recently. Now though the API API is it's initially available right now, it's like a limited preview. So I guess we can expect a broader rollout probably in the next few weeks or months, I I guess. And also from Reuters, Meta executives emphasize the API's superior customization capabilities compared to its competitors. That's important.
Remember that Meta just released several new models. We talked about it on the show a couple weeks ago. And so, you know, clearly, there's a lot of effort being made here to get open models out there. In this case, we're talking about LAMA, that compete at the same level with some of the largest closed foundational models. So think GPT and, Claude and and and those models.
This is interesting. You know, we're just talking about QEN three. Now we're talking about LAMA and their advancements there. And remember that these advancements aren't necessarily like, you know, how cool is this large language model that it can write this funny poem. Right? It's how can we leverage this large language model that's becoming better and better? Have you noticed, by the way, Justin, that, like, hallucinations aren't really much of a thing anymore?
At least hear people talking about them as much anymore, for sure.
Yeah. But it's really these advancements. And how do we now incorporate these other activities into this LLM wrapper, whether it be QEN three or or, you know, GPT o three, things like that, and include the function and tool calling and other kind of behind the scenes agent activity or or whatever they're trying to do to make them available for greater workflows beyond just going into your web UI and, like, asking it questions and having it write you a blog post or something like that. Right? Mhmm. So in this case, Llama in you know, some interesting stuff. And I think we're gonna see a lot I mean, I already use Llama quite a bit, but I think we're gonna see an uptick in that, especially because they there is a variety or rather there are a variety of much smaller lightweight models that, make it easy to fine tune and to transport and do other things with and run locally.
Yeah. Yeah. And I think you've mentioned to me before one of the other reasons you enjoy a llama is that it's pretty lightweight and you can use it offline, like on your own local machine to do some some training of data.
So Depending on the model, llama has some heavy models. They have one, just that, called Behemoth. It's gigantic.
As the name implies.
Yeah. Yeah. As far as a new model, but they have some lightweight models for sure with a fewer number of parameters that you can run locally, like you said.
And are those, now exposed by an API too so you could build your own offline tool chain presumably with this stuff, or is it only the larger models that are exposed by this API? Do you know?
You can run all of it offline if you want.
And, you know, that the idea though with the API is that you can integrate it into your whatever AI application that you're building, like you said.
Sure. Yeah.
And we're talking about downloading compiled code. You're not downloading four hundred billion parameters, necessarily.
Right.
Right. But but you can, run the smaller models locally. And and so as far as, you know, the benefit here, it really does come down to the customization, the ability to take something and make it truly custom for your environment, your application.
I'm working with somebody right now who's got a, a model that runs with just a tiny, tiny fraction of the parameters that we're hearing from from some of these other models that are out there. Mhmm. And you can incorporate the customization and the efficiency to still get very highly accurate results without the bloat. I I say bloat in air quotes because sometimes you need it, but without the bloat of running a gigantic multibillion parameter model.
Yeah. And presumably, if you're gonna do this offline, then you have a little more control over your data and some of the data leakage stuff we're talking about the previous article is not as much of a concern if you're running it, call it, quote, unquote, in the cloud or as a SaaS Mhmm. Offering. Mhmm.
Yeah. I mean, there's use cases there for sure to run a model. I mean, this is not what the article was talking about, but, you know, we're talking about it enough. You can run a model offline for sure if you have those kind of regulatory and security concerns.
But a lot of the time, you're just running it in your cloud instance, and that's still private, and and you have the scalable infrastructure available to you there. So that's good. But a lot of the time, it's also okay to just use an API, and you have those enterprise agreements, and you have the legal wrapper around your security concerns. And and that's sufficient for for many, if not most organizations out there. So I have no problem with running with running, you know, these models as a service, you know.
Yeah. Well, speaking of Meta and their AI, the next article is from TechCrunch talking about an interview between Microsoft CEO Satalia Nadala and Mark Zuckerberg, where Mark Zuckerberg asked Microsoft CEO how much of their code is written using AI, and the response was somewhere between twenty and thirty percent. So what, Satya is saying is that twenty to thirty percent of the code Microsoft, they believe is written by AI. Now it's not really clear how that measurement is done, but, Nadal actually turned around and asked Zuckerberg how much was written inside a meta generated using AI, and Zuckerberg said he's not actually sure.
So, you know, it'd be interesting to see if they're gonna go off and and research that. But, the article went on to say that Google in their earnings call last week mentioned that about thirty percent of their company's code, that's Google's company code, is also written by AI. So that's not really that shocking of a number for, Microsoft if you figure, you know, they're a competitor of Google. Let's call them roughly the same size when it comes to the amount of code they're probably dealing with, you know, to have twenty to thirty percent of that written by AI.
But just thought that was fascinating.
Yeah. How do they measure that exactly?
No idea. I don't know.
Yeah. Yeah. I mean, there was a time when you could sort of tell, like, text was AI generated, although that's gotten way, way better.
Had some patterns you could kind of pick out for using the same language.
And so I I don't know at this stage of the sophistication of models, you know, code generating models, how we measure that, how they measure that. I could tell you one thing, Justin.
Ninety seven point three percent of the code that I write is AI generated.
I I'm gonna give myself that last two point eight.
And ninety seven percent of your stats are made up.
Oh my good. No. I I I wish that was totally made up, but I would say that's pretty darn close. The vast vast majority of code that I write, which is all stuff in my lab anyway, the vast majority of it is, at this point, initiated by an AI prompt, and then I and then I massage it from there or copy and paste it from Stack Overflow. But very, very little is, is just like handcrafted anymore, which I'd still have to, like, Google what my way through anyway. But I will say it's been an incredible tool for me personally.
So, you know, I can only imagine how we're gonna see, AI generated code be the vast majority of professionally written developer code moving forward even at these web scale companies.
Well, at least at the as the first pass. Right? I think to your point, like, coming up with the program logic and sort of the first, call it a rough draft of the code, the first pass of the code Oh, yeah. Absolutely.
AI does a pretty respectable job of that. Right? Coming up with the logic and the function calls and all that kind of stuff. And then all you have to do is go back and debug it and make sure that it's accomplishing what you wanna accomplish.
So it makes you much more efficient. I mean, a lot of people would probably react negatively to that saying, oh, that's kinda cheating. I don't know. Is it cheating, or is it just using the tools that are available to you to make you more efficient?
Yeah. I don't think it's cheating. And and, you know, we're not I'm not necessarily talking about, like, throwing a prompt into, like, chat g p t and then copying and pasting the code from there, although you certainly could. But, you know, we're talking about, like, the integrated tool in, like, virtual visual studio, excuse me, or cursor or or or copilots and various other applications, things like that. So they're they're built in an integrated kind of tools. So they really are assisting you in your developer workflow, in your developer journey there to to write something.
Yeah. I mean, again, if it's making you more efficient and faster at doing what it is you're trying to do, I spend hours beating your head against the wall when you can have AI help you do it faster, especially for you and I who while we do write some code to get some things done here and there, we're not professional software developers. Right?
So And imagine somebody back in the day, like, when I was, like, tab completing commands on a Cisco command line.
I'm like, you're cheating. It's like, really? Come on. Getting the job done. I understand the concepts.
Yep. Alright. The, switching away from AI now. The next article is from The Register talking about how Asia as a continent has now reached fifty percent of IPv6 capability and leads the world in user numbers.
So this is actually a really interesting milestone twenty five years after this region first began its IPv6 journey. That's part of a quote from, the director general of APNIC Mhmm. Mhmm. Saying that, you know, it's been now twenty five years since that region began their journey.
They're now hitting the fifty percent mark for IPv6 capabilities.
Doesn't mean that fifty percent of all devices in the APNIC service area are IPv6 reliant just that fifty six I'm sorry. Fifty percent or more, of the the ASNs that are advertising addresses are now covered by IPv6.
Yeah. Yeah. And and, you know, keep in mind that we're still very much in a dual stack world. Right? And we're gonna be for quite a while. So it's not like, oh, no. You know, on the horizon, we're gonna be all I p v six, which I know there are some I p v six enthusiasts, evangelists, whatever you wanna call them, that are very, very hopeful for.
But this isn't a result of, like, all of the infrastructure investments and policies say you have to use I p v six. This is basically because there's, like, billions and billions of people. And so so I kind of think it's a result of that probability and, you know, we're we're gonna have more more of these dual stack devices, of course, and then also, IPv six allocations in general. And the limitations to IPv four, in general. Right? Mhmm. That we're seeing.
Yeah. For sure. You know, and I thought one of the more interesting things in the article is they ranked some of the countries in Asia as far as their I p v six capability percentages.
Phil, I know you've read the article. What, would you have guessed that India was the top country?
I mean, yeah. Like, India and China based on the number of devices. Right? Because we're talking about one point two, one point three billion, people that live there.
I don't know how many actual devices, and I'm gonna presume that the majority of those are mobile devices. Mhmm. So based on the number of devices, India, China, those would have been my guesses. So yeah.
Yeah. Actually, China is not even in the number two spot according to the data from APNIC. Vietnam is actually in the number two spot behind Vietnam. I found it interesting.
China's I don't know. I don't have it sorted by capabilities, but I'd say they're like five or six if I'm just kind of scanning down the list and actually Indonesia, is the lowest. So I don't know. Just kind of interesting to look at how it ranked.
Didn't seem wasn't what I would have expected like you said. Mhmm. Just based on population, and number of mobile subscribers I would expect there to be in the country, I would have thought India made sense to me, but China would have thought would have been a little higher up the list, but I guess just, maybe there's been more conversion in places like Vietnam and Japan and some of these other places.
So Mhmm. Mhmm.
Alright. Last but not least, we have an article from the AP News talking Associated Press News talking about a recent power outage that took place in Europe on Monday. They had power outages sweeping through Spain and Portugal and just a little bit of France. I know some of our colleagues here at Kintik were affected, were impacted by these power outages and we're not able to get online. We're not able to, you know, to do their day to day.
I know our own Doug Madore put some social media post out, showing some of the early stuff that our data is able to show. And I believe he's working on a blog post that he'll be putting out with a much more deep analysis on the impact that these power outages had on traffic and routing on the Internet. So, I look forward to reading more about that, but it looks like Spain lost fifteen gigawatts of electricity or roughly sixty percent of the demand on the country's forty nine million homes or forty nine million customers, I guess, for the power grid. So it was a very major power outage for for those three countries, Spain, Portugal, and and France.
Yeah. Yeah. And the reasoning for it is unclear, I saw.
Mhmm.
There were fluctuations and voltages, being monitored and detected well before the blackout happened. I can only imagine that it's a very complex combination of things that conspire together. You know, I read something about a major or two major disconnection events. So does that kinda hint toward equipment failure?
Maybe there was a maintenance window where something went awry. I don't really know if it was you know, I don't know. But, I did get a chance to get a sneak peek at Doug's blog post and draft, but I got to take a look at it and read that according to Kentik's data, Internet traffic dropped by about sixty eight percent in Spain and seventy four percent in Portugal. So obviously, having a massive power outage like that did directly impact Internet connectivity.
We gotta figure that, you know, data centers and that kind of thing are probably gonna have generator backup, so they'll be able to stay online for some period of time unless they run out of fuel. But your residential folks, the end users are likely not gonna have that lot of Correct. That infrastructure won't be on battery backup.
So Yep. Yep. And, and that affected, you know, beyond just the, you know, the peninsula of Portugal and and and Spain impacting some international character carriers, excuse me, and even disrupting some traffic to Moroccan telecom networks. So there was a broader regional impact, the ripple effect of of the grid failure beyond there.
Yeah. The article hinted towards the reason that France was impacted being maybe that they get their power from either Spain or or Portugal because, you know, as we know in Europe, a lot of things are interconnected, not just, commerce and politics, but the electric grid is actually interconnected so they can share power between the various different countries.
So, you know, again, we're still a little light on exact details of what caused the power outage, but looks very likely that the impact in France was a result of something that actually happened physically in either Spain or Portugal.
Mhmm.
Alright. So moving on to upcoming events. Starting off, we have Knowledge twenty twenty five. That's ServiceNow's user conference that's coming up next week, May five through seven. And, Justin, I believe that you're gonna be attending. Yeah.
I will be attending with our CEO, Avi Friedman. We're gonna go, talk to some of our customers out there and, support our partner ServiceNow as part of that conference. Great.
We have the Ohio networking user group happening on May eighth in Columbus, Ohio. We have CHINOG, that's the Chicago NOG, unrelated to the NUG. Sorry for the confusion there. Happening on May fifteenth. Justin, I believe you're also attending that one.
Yeah. I'll be going to that one. That's one of my favorite conferences. The program committee there always does a good job of lining up speakers. It's an all day event and one day event. Unlike the nugs that are usually an evening thing, this is a daytime thing runs, I don't know, say roughly eight to five.
Right. Right. Okay.
AutoCon three is coming up at the end of May, May twenty six through thirty, I believe, in Prague. That is the network automation forums event, that happens twice a year.
Justin, I believe you and a couple of people from Kentik are going to that one and delivering a workshop.
Yeah. We'll, we'll have a team there. We're gonna put on a workshop on network observability, network intelligent basics, go over what that means, what data needs to be collected, and give the attendees the ability to get their hands on the keyboard and try out some of these technologies. So looking forward to that. There's still spots available. So if you haven't signed up for the workshop, you haven't chosen one, we'd be honored to have you join us and sit through our presentation.
Great. And, you're killing me, Justin. You're gonna be everywhere. I see here the Missouri Networking user group is on June fifth, which I know you're the organizer. And I believe Scott Robon, who was just on last week's Telemetry Now, also the founder of AutoCon, is gonna be the keynote.
Yeah. I'm just one of many organizers, but, yep, I'm one of the local organizers for the Missouri Nug, and Scott has been kind enough to volunteer to come out to Saint Louis to attend that and give a keynote talk on his concept of total network operations. Folks haven't heard him talking about that on his Packet Pushers podcast. It's a passion project of his passion concept of his that he'll be talking to the audience at the MONUG about on, June fifth. So Great. Looking forward to that.
Yeah. Yeah. Justin, you're certainly getting around over the next few weeks and months, so, enjoy being on the road.
So for today, those are the headlines. Bye bye.