Join host Paul Spain as he discusses the exciting advancements in processors and AI capabilities from Intel and HP. Paul explores the incorporation of neural processing units (NPUs) in new chips, their impact on performance, and the potential for local AI processing in laptops. Additionally, Paul delves into the competition between Intel’s core architecture and ARM-based chips, shedding light on the Snapdragon X Elite and its potential impact on Windows devices. Join us for this in-depth look at the future of processing technology.

Apple Podcasts | Google Podcasts | Spotify | RSS Feed

Special thanks to organisations who support innovation and tech leadership in New Zealand by partnering with NZ Tech Podcast: One NZ, HP, Spark NZ, 2degrees, Gorilla Technology

Episode Transcript (computer-generated)

Paul Spain:
Greetings and welcome along to the New Zealand Tech Podcast. I’m your host, Paul Spain. Today we’re delving into the very latest from Intel and HP, and we’re going to touch on some other technologies coming through from other vendors too, and whether we’re moving into a really exciting new period for our laptops and our core computing devices, with the advent of new processors, new chips and built-in AI capabilities. First up, a big thank you to our show partners: One NZ, Spark, 2degrees, HP and Gorilla Technology. So look, what we’ve had announced from Intel over recent months is new processors which incorporate an AI element, an AI capability called an NPU, or neural processing unit. That’s coming through in these new chips that Intel has launched, and these are arriving right now in laptops from a number of vendors, including HP. Currently, and for the last week or two, I’ve been using a Lenovo ThinkBook with one of these new chips, the Intel Core Ultra 5.

Paul Spain:
So the naming is slightly different from what we’ve been used to with Intel chips in the past. And really this is, I guess, the next phase of things going forward; we’re going to become very used to having this local AI capability within our machines. Now, let’s have a little bit of a chat about the NPU, the neural processing unit, an artificial intelligence processing capability within your processor. What we see is that our processors today often have multiple functions. They can do normal computer processing. They usually have some graphics capability for displaying 3D graphics and so on, to a degree. And the most recent addition is the NPU, the neural processing unit. What it allows is for artificial intelligence workloads to be carried out locally, if you’ve got that in your computer.
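To illustrate the idea of routing a workload to the NPU when one is present, here’s a minimal sketch. The provider names are illustrative, loosely modelled on ONNX Runtime’s execution-provider strings; this is not an exact API, just the shape of the decision an application would make.

```python
# Minimal sketch: pick the best available compute block for an AI workload.
# Provider names are illustrative (loosely modelled on ONNX Runtime's
# execution-provider strings); a real app would query its runtime instead.

PREFERENCE = [
    "NPUExecutionProvider",  # hypothetical NPU block: efficient local AI
    "DmlExecutionProvider",  # GPU fallback
    "CPUExecutionProvider",  # always-present fallback
]

def pick_provider(available):
    """Return the most preferred compute block the machine reports having."""
    for provider in PREFERENCE:
        if provider in available:
            return provider
    raise RuntimeError("no usable execution provider")

# A Core Ultra-style machine reporting an NPU keeps the workload local:
print(pick_provider(["CPUExecutionProvider", "NPUExecutionProvider"]))
# → NPUExecutionProvider
```

On a machine without the NPU block, the same logic simply falls back to the GPU or CPU, which is one reason software vendors can target these chips without breaking older hardware.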

Paul Spain:
So compare that with how AI has traditionally worked with, say, ChatGPT or something that’s generating imagery for you. Any time you want to do that workload, you’ve got to take the data, package it up, and send it off to the cloud, where it’s processed and sent back; this all happens behind the scenes, of course. Now, this week in Sydney, HP Australia and New Zealand launched their newest devices featuring these new-generation Intel chips. And you will start being able to get your hands on these pretty much straight away. They will become available across both consumer products and business and enterprise-class products. Of course, gaming is part of that consumer side as well. So there are a number of questions I’ve been chatting through with people.

Paul Spain:
Do we need any sort of AI processing in our computers? Do we need the increased performance and the battery life that is coming with this generation of chips? I would say a lot of people with a current machine that’s less than three years old may well find that it’s actually performing just fine for them. But really, what matters when we’re investing in new technology is that we want something that’s going to have a bit of a lifespan to it. We don’t want to buy new technology that’s out of date, only to find over the years ahead that it gets slower and slower, or less and less capable for our requirements. Because, of course, when you buy a computer, you’re usually going to use it for a number of years, and over that period the software is changing all the time, whether it’s things like Windows, your core office applications, things like the Adobe suite, whatever it is you use within your workday, or, if it’s a home device, whatever it is that you do on that equipment. So I’d say yes, I’m always looking for something that’s newer and faster. As for this addition of local AI processing, and whether we need local AI capabilities or that local neural processing unit: certainly as of today, most often when we’re using AI systems, that processing is happening in the cloud.

Paul Spain:
So let’s drill into that one now, because I think it is one of the big topics. Do we need that processing? I would say in some ways, no. No, we don’t actually need a neural processing unit right now. But I guess the caveat to that is just how quickly we’re seeing AI technologies evolve. And when we look at the costs associated with some of these technologies, a portion of that cost relates to the computing power, the compute time, that we’re actually using in the cloud. If you’ve ever used ChatGPT or some of these other tools extensively, you’ll find certain elements of them, particularly when it comes to graphics processing and generating imagery, are more processor-intensive. There will usually be a limit on, say, how many images we can produce with a given tool in a given period, such as within a monthly subscription cycle. And that points to the cost of producing this in the cloud.

Paul Spain:
So one of the possibilities that I’m seeing right now, and I’ve had a number of discussions behind the scenes and off the record with various people from chip makers and laptop manufacturers on this topic, is: well, what does the future look like if Intel is investing in putting a local AI capability on the chip, and Apple and others are doing something similar? What’s the point if everything’s happening in the cloud? The suggestion I’m hearing is that there are certainly some distinct possibilities ahead. In fact, there may well be some reasonably substantial announcements coming up in this regard in the next little while, so we will wait and see. But what I’m picking is that there is quite a possibility that the likes of Microsoft, Adobe and various others will start wanting to make use of that local processing capability, and that having that type of chip, such as an Intel Core Ultra 5, 7 or 9, could well be helpful, particularly for those doing these heavier AI-type workloads. And we’ve seen some pretty interesting capabilities coming through when it comes to AI-generated video, as well as the imagery and the various things that you might soon be using AI for, if you’re not already. So I think this is an interesting space to watch. One suggestion that was made to me: look, what if there was an option between paying, let’s say, $35 a month for your AI tool of choice to have it run locally, or $50 a month to have it run purely in the cloud? There’s probably going to be some crossover in some of these tools as well, where they might use some local processing and some cloud processing. But if that were the case, and that’s a $180-a-year price difference between the two (these are somewhat hypothetical numbers), this could make a real difference and be a real benefit of having that local neural processing unit running.
And yeah, look, I think it’s going to be fascinating to see how this actually plays out.
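The hypothetical pricing comparison above is simple arithmetic; sketched out (the subscription figures are the episode’s illustrative numbers, not real prices):

```python
# The episode's hypothetical subscription figures, not real prices.
local_monthly = 35   # $/month: AI tool running on the local NPU
cloud_monthly = 50   # $/month: same tool running purely in the cloud

# Annual saving from choosing the locally-processed tier.
annual_saving = (cloud_monthly - local_monthly) * 12
print(f"${annual_saving} a year")  # → $180 a year
```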

Paul Spain:
Of course we know that the likes of Microsoft want to generate as much as possible from their Azure cloud, but there are challenges in terms of how they balance these things and how quickly they can grow and scale, and they may well be quite keen to lean in on some of that local processing. So the advantage that you get with the neural processing unit is a reduced wait time, because the work happens locally on your computer rather than going out to the Internet and coming back. You’ve potentially got some security benefits there too, where the data doesn’t have to be handed off to another party, because it can all happen within your computer. And we may well see over time some real cost benefits: if you’re not bogging down someone’s cloud system and you’re doing that processing locally, maybe we will see that bring down the cost of some subscriptions, although time will tell on that one. Let’s wait and see what announcements are made on that front. But I think overall it’s pleasing to have that neural processing unit incorporated into this new series of devices. So when we talk about chip architectures, there are two that we’re most commonly interested in today when it comes to computers and smartphones and so on. First is the ARM architecture, which is most common in our smartphones and has traditionally been focused especially on low power consumption. And there are a lot of manufacturers that make ARM-based chips or processors, because Arm Holdings licenses that architecture, that technology, to other companies to use.

Paul Spain:
That includes the likes of Apple and Qualcomm and quite a number of others. Now, we can compare that with the Intel Core architecture, which has been the mainstay of our PCs and laptops for a long time, through varying iterations. And probably the key difference, traditionally, has been that for full-blown computing devices you get much more performance out of an Intel Core processor than you would from an ARM chip, which is more suited to phones. Now, over the last few years, what we’ve seen is a sort of coming together. When we look at new laptops coming through from the likes of HP and others making Windows-based laptops, the question is really: what’s the future of Intel’s chips? Because, and I’ve talked about this on the NZ Tech Podcast before, I’ve spent a lot of time on Apple’s M1, M2, and now M3-based devices, and I’m finding that these laptops give great performance, yet sometimes twice the battery life of what we will see on a Windows-based laptop, which is quite staggering. So there’s been that sort of pondering: what’s going to happen? Can Intel step up to the plate and really compete strongly against these ARM-based chips that Apple is using, because they leverage that architecture? Or are we going to be leaning into vendors such as Qualcomm, which is making ARM-based chips specifically for Windows devices? Well, the Snapdragon X Elite is the new chip.

Paul Spain:
And I’ve been seeing friends in other parts of the world getting hands-on with devices based around these new chips. And I’ve got to say, they are sounding very, very competitive with what’s out there from Apple and from Intel. So what are we hearing? We’re hearing that the Snapdragon X Elite can give the sort of long battery life that we’ve seen from Apple’s M-series processors in the most recent three iterations of the MacBook Air and MacBook Pro, from both a performance and a battery perspective. And these are coming to Windows devices. Microsoft, of course, have already announced new Surface devices. And really, what I’m picking is that we will see vendors such as HP, Lenovo and others launching laptops with the Snapdragon X Elite in the months ahead. So nothing official, no announcements on that basis yet, but just reading between the lines, that seems very, very likely to happen, and in the not-too-distant future.

Paul Spain:
So that’s going to make our purchasing decisions, I guess, once they come through, a little bit more complex, or maybe it will make them easier. But why would it be more complex? Let’s say the Snapdragon X Elite ticks all the boxes from a performance and battery life perspective. Well, then it comes down to how well Windows actually performs on these chips. In the past, Windows and Windows-based apps have not run well on ARM-based processors, and it’s always been something of a frustrating experience, usually from a performance perspective as much as anything else. But the feedback that I’m seeing shows the Snapdragon X Elite as being really competitive. We now have a reasonably capable portfolio of apps that will run natively on Windows for ARM chips. Probably the most recent announcement that I saw is that Google have launched an ARM version of the Chrome browser, which will be one of the most used applications, if not the most used application, on Windows today. And Microsoft Office?

Paul Spain:
That’s been available as an ARM version for some time now, and so largely Microsoft’s applications have been made available for ARM. We’ve got Google in there, and of course, through an emulation capability, you can also run other applications that were designed for an Intel processor, so those things will run on ARM too. Now, there will be some variations and some limitations, as we’ve seen in the past, but if we’re really getting that great performance and great battery life, I think this will be the push, as it was when Macs moved across to the M1 chips and away from Intel, for software developers to get moving and make ARM versions of their apps available, which in many cases is not necessarily a really big deal. But there might be, or there will be, some apps, I’m sure, that will take longer, or just old applications that don’t come across so well, and that’s where that emulation capability exists. So that’s kind of my overview.

Paul Spain:
I think the thing I’m waiting for is this: of Microsoft Copilot, ChatGPT, the Adobe suite and the other AI tools that many of us are using, which are going to be able to tap into a local neural processing unit, that local AI capability? That’s where we start getting extra benefit out of these new Intel Core Ultra processors, this new generation. And look, how are things going to play out between Intel and, at this stage, Qualcomm with the Snapdragon X Elite, as we start seeing computers released down the track? What I would say for those in organisations where you’re thinking a fair way ahead: really, you should be trying out all of these machines and getting a feel for what works within your organisation. And especially if you have people in particular areas that really need to do a lot of AI work, you’ll be wanting to monitor the benefits of having that local AI. And of course, one of the other aspects of being able to do AI processing locally is the security that’s associated with it. Right? If you never have to put that data into the cloud, if it never leaves your machine, then as long as your machines are well secured, you’re not having to deal with dramas and risks such as what we saw from ChatGPT, or OpenAI, last year, where people were logging in and seeing the prompts other people were giving ChatGPT, and those responses. So it’s definitely got a couple of plus points to it, but ultimately software vendors have to be leveraging it, and that’s what we’re looking for. You know, I’m picking that AI capabilities within the Adobe world will probably start to make use of this local processing, in part because Adobe were part of HP’s announcements this week. And I see Adobe as a company that probably isn’t as focused as Microsoft on running a cloud in quite the same way.

Paul Spain:
They might well be more open to a hybrid approach of doing more processing locally. But look, it’s certainly quite possible that Microsoft will be tapping into these neural processing units as well. So that’s it from me, Paul Spain. Thanks for joining me on this episode of the NZ Tech Podcast. We have had a few interruptions over the past few weeks, but it’s great to be back again, and we look forward to catching up next week. We’ve got some really great guests coming up over the next few weeks, and of course, a big thank you to our show partners: One NZ, Spark, 2degrees, HP and Gorilla Technology. Catch you next week.