Listen in as Stephen Phillips (Snyk) joins host Paul Spain to discuss security vulnerabilities in VPNs, firewalls, and AI-generated code and the importance of backup strategies. Plus, a look at some of this week’s Tech News including:
- A new $1-Billion Kiwi startup
- Solar storm disruptions
- Crime group Huyton Firm’s downfall
- Google Cloud deletes UniSuper’s online account
- OpenAI’s ChatGPT-4o
Special thanks to organisations who support innovation and tech leadership in New Zealand by partnering with NZ Tech Podcast: One NZ, HP, Spark NZ, 2degrees, Gorilla Technology
Episode Transcript (computer-generated)
Paul Spain:
Hey, folks, greetings and welcome along to the New Zealand Tech Podcast. I’m your host, Paul Spain, and great to have Stephen Phillips from Snyk back in the studio with us. How are you, Stephen?
Stephen Phillips:
Good to see you, Paul. Great to be here.
Paul Spain:
Yeah, look, there’s so much going on. I don’t know, it feels like a more exciting week than usual. There’s always cool things to talk about, but some good stuff coming up. A Kiwi startup that most people have probably never heard of. I only just found out in the last week that they were founded by a Kiwi. Their valuation is well north of a billion dollars because they’ve just recently raised over a billion dollars in funding. So that’s very exciting.
Paul Spain:
A whole bunch of other things around AI, security, VPNs, and whether you can trust your cloud provider to keep your data and not accidentally delete it. That one is rather an interesting one. But of course, a big thank you to our show partners: One NZ, 2degrees, Spark, HP and Gorilla Technology. So, first up, Alex Kendall of Wayve. I did not know that Wayve was a startup founded by a Kiwi, but they’ve been getting a lot of media attention locally in New Zealand over the past few days, and rightly so.
Paul Spain:
This is a company I’ve been following for a while, and the reason I’ve been taking an interest in them is their innovative approach to solving the autonomous vehicle problem. There’s been a bit of coverage on them from time to time. Bill Gates featured them about a year ago on his GatesNotes blog. And it’s been interesting recently to watch what’s happening with driverless cars that are not using lidar. They’re just using lower-cost sensors, effectively cameras, to do the work, and trying to do it, I guess, in a more human-like way. And when you look at that Bill Gates video and other things online, you see an AI company that appears to be getting it right when it comes to autonomous vehicle tech. So, yeah, really pleasing to know that that’s headed up by a Kiwi.
Stephen Phillips:
Yeah, it’s great, basically, that someone else is actually taking the same route and sensor fusion is probably what it’s all about. And if you can do this with lower cost. Yeah, it’s a great, great outcome.
Paul Spain:
Yeah. Because you’ve had Mobileye and Tesla as, I guess, the big players that have been iterating on this. And I think we talked last week about this situation we’ve landed on, which is kind of weird that we’ve actually hit. I would say it’s quite an earth-shattering moment for autonomous driving AIs, where we’re now having users of this technology, users of Tesla’s tech, finding that they’re passengers. There were two cases I came across where the partner of someone driving their somewhat autonomous Tesla with the latest beta was saying, look, take your hands back off the wheel and hand it over to the AI. It drives better than you do. I think this is a big line that we’ve crossed, right, where there’s actually that much confidence. Because certainly if you try, say, the Autopilot we have in New Zealand, I don’t think there are too many passengers that would be anywhere near making that sort of comment, as opposed to, put your hands back on the wheel, stop trusting that dodgy AI.
Paul Spain:
It’s going to kill us.
Stephen Phillips:
Right?
Paul Spain:
And so many people have joked in the past about Tesla’s Autopilot and FSD technology trying to kill them. But we’re now at the point where it’s like, no, take your hands off the wheel, hand over to the AI, it can do a better job of driving than you. So I think that’s a really big deal. And looking at the technology from Wayve and how it drove around London in the video with Bill Gates that anyone can find online, that’s twelve months ago, and that looks at least as good, if not better, than what we’re seeing today.
Paul Spain:
Well, than probably all of the other players in the market. And they’ve only, I think, been around about five years, so they’re really getting some stuff right. The fact that it’s not commercially available and so on, obviously there are some things still to do before we see them in vehicles, and it takes time to cut those deals and put it all together. But, boy, it looks very promising. Looks like we’ve got some great competition ahead.
Stephen Phillips:
Yeah. So as soon as you get more than one player, basically, and the licensing is right, you’re going to get it to the masses. So all of the car companies will be able to pick up on it and likely on commodity hardware.
Paul Spain:
Yes, yeah. So it shouldn’t be crazy expensive to actually build into the vehicles, certainly over time. So that’s great from a Kiwi perspective. The other thing that certainly had an impact locally is the solar storm going on. And it was a treat to be able to go out over the weekend and see an aurora from Auckland. It wasn’t something I ever expected to see. Did you get a chance to see it at all?
Stephen Phillips:
No, I didn’t see it myself.
Paul Spain:
Oh, you missed out. Okay.
Stephen Phillips:
Should have had my camera out facing south, apparently.
Paul Spain:
Yeah. So I probably wasn’t completely up to date with the news. I was looking online and it was saying 6pm to 9pm was the best time to see it. I was like, it’s ten or eleven. But anyway, went out for a drive and got to see it. So that was good.
Paul Spain:
But as part of, I guess, a sort of risk mitigation, Transpower actually turned off some of our power in New Zealand. And there were some Tesla-related dramas going on, with people reporting offline issues with their GPS and whatnot. I was driving down near MOTAT in Auckland, the Museum of Transport and Technology, and the Auckland Zoo, and was wondering why, here I was on a 50-kilometre-an-hour road, and the Tesla thought the speed limit was 100 k’s an hour. So I did have to take a little bit of control over the Autopilot at that stage to save myself getting into too much trouble. And I was also experiencing a lot of phantom braking with Autopilot, which is something I’d sort of forgotten about, because I don’t remember experiencing it with Tesla Autopilot for a long time. Maybe others have different experiences, but in some online discussion others were saying they’d had similar sorts of things.
Paul Spain:
So there maybe is some truth that these solar storms can cause some technological problems. What’s been your experience? Did you notice anything, or are you aware of things in the past where technology’s been impacted?
Stephen Phillips:
Well, I’m riding a bicycle at the moment, so I’m not having that experience myself. But I do recall back in the eighties there were some really big solar storms, and it took a number of years to get to the bottom of it, but software was actually changing. People were having changes in their bank account balances and things like that, even that far back, because of what turned out to be bit flipping.
Paul Spain:
So what was in the memory of a computer would actually change?
Stephen Phillips:
And that was the last time we had a solar storm of the same sort of magnitude as what we’ve just experienced. So, yeah, probably a lot more instances of problems now. That’s where parity-corrected memory basically came into play, after that event.
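As a quick aside on how a single parity bit catches this kind of flip, here is an illustrative Python sketch (not from the episode; the function names are invented for demonstration):

```python
def parity_bit(byte: int) -> int:
    """Even-parity bit for an 8-bit value: 1 if the number of set bits is odd."""
    return bin(byte).count("1") % 2

def detect_flip(byte: int, stored_parity: int) -> bool:
    """True if the byte no longer matches the parity recorded when it was written."""
    return parity_bit(byte) != stored_parity

word = 0b01101100            # value written to memory
p = parity_bit(word)         # parity bit stored alongside it
flipped = word ^ 0b00001000  # a single bit flipped by a stray particle

print(detect_flip(word, p))     # False: data intact
print(detect_flip(flipped, p))  # True: single-bit error detected
```

A lone parity bit only detects an odd number of flipped bits; modern ECC memory goes further and can correct single-bit errors.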
Paul Spain:
Ah, very interesting. Okay, so you were actually able to tell there was an error, a change in the data. And of course, all this was predicted by the inventor of Monopoly. You familiar with that?
Stephen Phillips:
No.
Paul Spain:
No. Well, there’s a particular card you pick up in Monopoly that says, there’s been a bank error in your favour.
Stephen Phillips:
Oh, brilliant.
Paul Spain:
$100. So I’m assuming that they knew this was gonna happen and that’s how that came about.
Stephen Phillips:
I’m sure it is, yes.
Paul Spain:
Now onto VPNs. The virtual private network, or VPN, has long been touted; there are lots of podcasts sponsored by VPN vendors who insist that this is how you’re going to keep safe and secure. And of course, there are a lot of aspects to keeping your data safe and secure. But anyway, we haven’t been touting VPNs on the New Zealand Tech Podcast and trying to sell them, unfortunately, because news has come through that there’s actually been a big hole when it comes to VPNs that dates back to the dawn of VPNs, effectively. I was reading about this last week when the news hit. The situation in which this can come about is where you maybe don’t have complete control of the network. So it’s most likely to happen if you’re on, say, a public Wi-Fi network that’s been compromised.
Paul Spain:
And we all know to be cautious about public Wi-Fi networks. So a little bit of a reminder here that, yes, public Wi-Fi networks still have their issues and their dramas. Effectively, you connect to a network where a bad actor has control of the handing out of IP addresses and other associated information, what is technically referred to as the DHCP server. It can give out information that forces communications you thought were going to happen over your VPN to not actually go over the VPN, and therefore not be secure. Is this a surprise, Stephen?
Stephen Phillips:
No, it’s not a surprise at all. Pretty much every time we’re told to use these secure devices, it turns out to be probably one of the more insecure ways of actually communicating, because just about all of the VPN providers have been owned multiple times over the past number of years.
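For a sense of the mechanics being described: routing tables prefer the most specific matching prefix, so two /1 routes pushed by a rogue DHCP server can outrank a VPN’s /0 default route. A hypothetical Python sketch of that longest-prefix-match logic (the interface names are invented for illustration):

```python
import ipaddress

# Simplified routing table as (prefix, interface). The VPN installs a default
# route through its tunnel; a rogue DHCP server can push more specific routes.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "vpn-tunnel"),    # VPN's catch-all route
    (ipaddress.ip_network("0.0.0.0/1"), "attacker-gw"),   # pushed via DHCP
    (ipaddress.ip_network("128.0.0.0/1"), "attacker-gw"), # together these cover everything
]

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route covering dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("93.184.216.34"))  # "attacker-gw": traffic skips the tunnel
```

Because the /1 routes are more specific than the VPN’s /0 default, every packet leaves via the attacker’s gateway in the clear, even though the VPN itself is still "connected".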
Paul Spain:
Why do you think the VPN and firewall providers have struggled so much to provide a consistently secure service? We see so many bugs and issues both from the VPN providers, like those offering consumer VPN services to keep your secrets safe, as well as the firewall vendors.
Stephen Phillips:
Yeah. Well, it probably comes back to around 20 to 25 years ago. A lot of the libraries being written at the time weren’t being written with security in mind. They weren’t tested for security. And those libraries are still the basis of many of the products in production today. So once you find a particular weakness in one of them, then the security researchers, especially the bad actors, go, where there’s smoke, there’s fire, and they dig in and find other ways to compromise the libraries. Every month there’s another vendor that gets caught out by this. And the government in the US, through CISA and the like, is actually discouraging companies from using these.
Stephen Phillips:
And in fact, there’s a whole big initiative at the moment from CISA to move to a secure-by-design approach. And a number of these vendors that have been compromised multiple times, 68 of them, have signed up to be part of the program so that they can learn how to do things better.
Paul Spain:
Well, as long as we learn from these lessons that we move forward, then we’re headed in the right direction. I guess we’re never going to be in a situation where all of these things are perfect, are we?
Stephen Phillips:
No.
Paul Spain:
Yeah. Now, quite an interesting story I came across. The BBC were reporting on an organised crime group that was effectively brought down by their own communications, what they understood were encrypted text messages, and it ended up uncovering a whole lot of stuff. There was a violent feud over stolen drugs and so on. This related to the crime bosses of the Huyton Firm, named after the part of Merseyside where they were based, and they’d been running for 30 years.
Paul Spain:
So for three decades this crime group had been operating, led by two secretive brothers. And it was quite interesting to delve into how they were speaking very freely on the EncroChat messaging system, which we’ve talked about in the past. They believed everything was safe and secure, and of course, what happened was that system was taken over by law enforcement, who were able to see all of these supposedly secure communications. It’s quite fascinating to read the actual story, because we knew about that happening, but to delve into how this decades-old crime group were basically busted was quite a fascinating read. Messages included one of the parties ordering violent attacks and murders, one of the individuals, Vincent Coggins, boasting about personally slashing a businessman with a knife, and the brothers sending photos of blocks of cocaine and discussing a deal for half a ton of cocaine, apparently valued at over $30 million. So it’s quite nice to see the technology achieve something good like this.
Stephen Phillips:
Yeah, and it’s not the first time. Law enforcement in the US and Australia have been known to put up secure encryption apps on the mobile app stores, and they suckered in a whole lot of organised crime groups; thousands of people have been put away off the back of that. So, yeah, it’s great to see that innovation being used by our law enforcement.
Paul Spain:
Yeah. And look, these things can be used positively in really innovative ways. That’s good. But we have ongoing, related discussions around encryption and governments wanting to tap into normal, everyday encryption mechanisms. I know the UK was really pushing on this, and I think Meta came back and said, look, we’re not going to allow you access to our systems. That’s probably quite a deep dive we could do into that world. But we do come from a situation where, with traditional communications, text messages and audio calls, police have been able to get a warrant and listen in on those things. As we move into a world of strong encryption mechanisms, that whole picture changes, and I’m not quite sure how it’s going to play out, because we’re getting pushes and pushbacks and so on.
Paul Spain:
How do you feel about the risks of opening up encryption? If we allow governments to effectively demand access, can the Internet really work properly if you go down those tracks?
Stephen Phillips:
I think, certainly for a number of decades now, there have usually been backdoors in most of the cryptography libraries and protocols. And I think we do lose some form of protection if authorities can’t do that. So I certainly fall on the side that authorities have to have access to all of these things, but there needs to be protection, and that needs to be balanced by making sure you have to get an appropriate warrant to do it. The whole dragnet thing, being able to pull and look at all the information, I think that’s going a step too far. But once you’re down to intercepting specific communications with a warrant, I think there should always be a backdoor.
Stephen Phillips:
But the challenge is we don’t control all of the cryptography standards. So there’s always going to be some risk that the bad actors take advantage of that when there’s a vulnerability in the software. And there’s always a vulnerability in the software, and then they’ll be able to exploit it.
Paul Spain:
Yeah, yeah. And it’s always been in my mind also, we talk about crypto, cryptography, cryptocurrencies, and they’re treated like they’re this perfect thing that doesn’t, and maybe can’t, have any bugs. But I always sort of presume that at some point you’re going to get some issue, because software’s never perfect. But hey, maybe with our cryptocurrencies there’s no drama and we can’t lose this stuff. And I guess that’s where the blockchain comes in: in theory it provides that immutable record of what’s happened, and only good things are allowed to happen.
Paul Spain:
But it has always sat in the back of my mind: what happens when someone finds a way in, and people’s currency can just disappear? You’ve got a good side to not having third parties, but there must always be some level of risk with that, I’m sure.
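As a rough illustration of the immutability being described here, each block in a chain commits to the hash of the one before it, so editing history breaks every later hash. A minimal Python sketch (a toy with made-up transactions, not a real blockchain):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash of a block: commits to the previous block's hash plus this block's data."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain of transactions.
chain = []
prev = "0" * 64  # genesis
for tx in ["alice->bob:5", "bob->carol:2", "carol->dave:1"]:
    prev = block_hash(prev, tx)
    chain.append({"tx": tx, "hash": prev})

def verify(chain) -> bool:
    """Recompute every hash; any edit to past data makes verification fail."""
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["tx"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))               # True: history is consistent
chain[1]["tx"] = "bob->mallory:2"  # tamper with a past transaction
print(verify(chain))               # False: the stored hashes no longer line up
```

This is what makes the ledger tamper-evident; it does nothing to protect the keys, or the exchanges holding them.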
Stephen Phillips:
Well, for the last number of years, there’s always been the crypto exchange of the week that’s been breached.
Paul Spain:
Well, yeah, that too. So it’s all very well even if the cryptography works well; there is that flip side, isn’t there?
Stephen Phillips:
Quite often law enforcement have been able to recover stuff as well, because of the ability to audit what’s happening. Companies like Trail of Bits can do really good forensic analysis to figure out who the actual parties are who have ended up with the benefit.
Paul Spain:
Yeah, yeah. And that’s where, you talk about the crypto exchange of the week sort of thing, I guess it comes down to how your crypto is controlled. And it is one of those aspects, even when I think about some of the bigger players coming into the space, banks and the like, big reputable firms, and you can see mechanisms they could absolutely put together to keep themselves safe and secure. But are they right? So I think if you get into the world of crypto, make sure you have a look and have a think around how you’re going to keep your crypto safe.
Paul Spain:
Yeah. Leaving all your funds live with an online exchange, well, if they get hit, then you’re in trouble. So I guess that comes down to controlling your private keys, largely. And we hope that there aren’t any broader issues.
Stephen Phillips:
Yeah, I think banks are still struggling with how do they protect payments with normal bank accounts at the moment, let alone crypto accounts? It’s going to take quite some time for the regulator to actually move things along to the point where the actual banks or the crypto banks are actually forced to take action.
Paul Spain:
Yeah. Now, there’s been some interesting coverage. I hadn’t actually seen it too broadly when I first saw it, but maybe it’s gone a bit broader now. This came up late last week, and it’s around a firm that was basically running everything on Google Cloud, and that doesn’t seem like anything too surprising. What was surprising was that somehow Google seemed to delete their account and all of their data. Now, I’m sure this sort of thing will happen with a small business from time to time: they get into a bit of trouble, can’t pay the bill with their cloud vendor, and maybe the business is at its end, so it shuts down and the data gets lost.
Paul Spain:
This situation was very, very different to that. The company was UniSuper, and they were responsible for, trying to recall the figures, many, many billions of dollars. And their systems were dumped by Google. And look, conveniently, you would think, they had their data replicated to another location with Google. Oh, that’s good; if it’s gone in the primary location, you’ll just go to the redundant location. But no, this was some sort of formal delete-and-removal-of-the-account process.
Paul Spain:
So the replicated data was also completely dumped. And here we were with UniSuper’s CEO having to email the fund’s 620,000 members, this was, I think, Wednesday night, maybe last week or the week before. They were having to go cap in hand to customers. They obviously still had enough data to be able to email their customers, but basically they had to go to a third-party backup, which fortunately they did have, as everyone should. Utilising that backup mechanism, they were able to restore their systems. But I think it’s a reminder that an entire cloud subscription can be deleted. And in this case the communications weren’t just coming from the company themselves.
Paul Spain:
Google were also part of the announcement, I guess as an indicator that this was an issue beyond just that company; their cloud vendor, in this case Google Cloud, had made a mistake. So I think there are some lessons in there for all of us.
Stephen Phillips:
Absolutely. It really makes you think. We’re being told all of the time to have a second factor for identifying ourselves. It’s just as important, probably more important, to have a second factor for doing your backups. And you heard it here first on the New Zealand Tech Podcast: two-factor backups. That’s what we should all be doing. Multi-cloud is quite a common thing in the tech world today, but a lot of organisations don’t actually have that second factor. So big ups to UniSuper for actually doing that.
Stephen Phillips:
It’s certainly a risk that many companies haven’t thought about.
Paul Spain:
And yeah, it’s even one of those considerations: you might be running, say, a backup on your email and your file storage, be it with Google or with Microsoft, but you want to have a look at where that stuff is going as well. There are all sorts of negative possibilities. I found the number on UniSuper, how many funds did they have under management? Looks like around NZ$200 billion. So it wouldn’t have been a trivial situation if they didn’t have that backup. I’m picking, with the pace at which they’ve done this, that they had some pretty robust business continuity processes that were well tested. Because we know organisations that get hit with cyber incidents and don’t have well-tried-and-tested routines will often take weeks and weeks to get systems back up and running. It’s really only through following good business continuity processes, testing, and incident response planning that you get into a position to recover things in a swift and timely manner.
Paul Spain:
So very pleasing to hear that the outcome was good. But really, there are some big lessons for everyone there. There’s a degree to which, looking back 20 years when computers were sort of new, everyone learned the lesson because they got burnt in one way or another: you need to have backups. But in many ways backups have got so reliable that issues with them don’t touch us that often these days. So it’s important to hear these stories, because these are the exceptional cases, and things can happen. You can’t necessarily trust that one vendor or another is going to perfectly look after your data, for all sorts of reasons.
Stephen Phillips:
Yeah, and it’s not just the actual cloud providers that you need to look at here, it’s the backup service itself. The bad actors are now starting, through software supply chain attacks, to go after the backup tools being used to do the backups. Just in the last week we’ve had companies like Veeam, which many New Zealand small and medium businesses rely on, that have had some compromises or vulnerabilities. And going back 20 or 30 years, Arcserve has had them as well. So you’ve really got to think about how you diversify your risk by having all of these things in place. Maybe even a third factor might be required at some point.
Paul Spain:
It’s a really good reminder, and also a reminder of the importance of going through a robust software selection process when you’re choosing vendors, so you’re picking one that’s as far along the trustworthy spectrum as is appropriate for your situation. Absolutely. So, really important lessons here. Now onto AI. It’s been a bit of a week when it comes to AI announcements. I had TV3 come in here yesterday for a little bit of an interview, because OpenAI had made a new announcement around ChatGPT-4o.
Paul Spain:
And it’s faster, and it’s free now. You used to have to pay to get onto the GPT-4 version. They’ve got demonstrations of it singing, a bunch of bits and pieces in there. It’s multimodal, in the sense that it’s not just text or audio input, but text, live video and audio input and output. To a degree that caught my attention, because this multimodal element was really the big flag that Google were waving when they first announced Gemini. When I heard the announcements yesterday, it really gave me the feeling that OpenAI aren’t actually quite ready with this technology and that they’re just making the announcements. Yes, we know they’re progressing and working hard all the time behind the scenes. But the day following was the Google I/O announcements, and then we heard about Project Astra, which is Google’s vision for future AI assistants.
Paul Spain:
We heard about a whole bunch of things, some of which are ready and some of which, like Project Astra, aren’t. And look, it’s all really exciting stuff, but they seem to be trying to out-announce each other rather than actually out-deliver each other. Am I being over-dramatic on this, Stephen, or what are you thinking?
Stephen Phillips:
I certainly think it’s all about wanting to win the hearts and minds of people, and they’re not worrying about the truth of their delivery at the moment. We’re well into the top of the hype cycle. Which ones can you believe? I don’t think either at the moment, and you really have to try it out yourself. I suppose that’s the purpose they’re looking for: they want people to engage. But you.
Paul Spain:
Can’t, because they’re showing off stuff that’s not here yet. As a paying ChatGPT user, I thought, well, let’s fire it up and have a look at this 4o. And yes, you can see some of it, but the stuff they were really hyping, where it’s singing to you and so on, sorry, it doesn’t work. And Google, this time, after they were embarrassed last time around with their Gemini announcements and sheepishly told everyone a day or two later, oh, this was actually a made-up demo, we can’t deliver it, they’re being quite upfront: oh, this is our vision for the future.
Paul Spain:
So they’re admitting that what they were showing we cannot get; it is not actually available. So it’s a bit of a sign.
Stephen Phillips:
Of the times at the moment. Can you trust an organisation to tell the truth anymore about the current reality of their capability? It’s a real problem now. I think we’re suffering from that.
Paul Spain:
Yeah. Look, I think AI has a whole lot of good possibilities to it. But there is that pressure for these firms to be number one, to be generating the revenue, to be who you’re going to choose to go with from an AI perspective. And in multiple areas they’re either overhyping it or falling short. Don’t get me wrong, because I’ve said it on TV and so on: I think everybody needs to be educating themselves about the latest AI. We need to be experienced in utilising the technology so we understand how our future is going to be impacted and how we can be on the good side of it, not somebody that’s being impacted too negatively. But yeah, it’s a bit overdone. And it’s not just OpenAI and Google; we see it from Microsoft too, with how they’ve hyped up the ability of, say, their Copilot in a Teams meeting. You fire up Copilot in a Teams meeting and you’ve got half a dozen people sitting in a meeting room with a Teams Room system and maybe one or two remote guests, and the half a dozen people get treated as though they’re one person.
Paul Spain:
And so it doesn’t do it. Whereas you can fire up Microsoft Word and ask it to transcribe, and it says speaker one, speaker two, speaker three if you’ve got three people dictating to it, for instance. So yeah, the vendors have some work to do on the AI front for it to match up with the way they’re selling the technology.
Stephen Phillips:
Yeah, I think there’s a lag in regulation as well. There aren’t really consequences for a lot of these megacorps until it gets to the point where they’ve used monopoly powers so much that regulators want to break them up. And I think we’re starting to get close on that with both Google and Microsoft at the moment.
Paul Spain:
Fascinating times ahead. And don’t take me making a little bit of fun of these companies as discouragement from looking at the technologies, experimenting, and seeing what role they may play in the future. Because look, they’re going to keep improving. What we see today is the worst state that AI will ever be going forward. It just gets iterated on, right?
Stephen Phillips:
Writing and documenting how your software is supposed to work, user guides, manuals, all those types of things: there’s a huge amount of benefit for organisations in automating a lot of those processes, which, let’s face it, are pretty drudgery-type work. So if we can use AI for doing those types of things.
Paul Spain:
Super helpful.
Stephen Phillips:
Super helpful.
Paul Spain:
Yeah.
Stephen Phillips:
So much productivity gained.
Paul Spain:
Now, putting on your Snyk hat, you work for Snyk. What are you seeing with Snyk? You’re helping those that are writing software, coding for internal or external situations, often with pretty large teams, right? I think we’ve chatted in the past; you tend to work with organisations that have those big software teams, of which there are a bunch around New Zealand and Australia. Where does AI fit in with that? And where do your tools fit in with helping organisations with their software development from a security perspective?
Stephen Phillips:
Well, a lot of organizations, probably 50 to 60 percent of organizations or more, have developers experimenting at scale with AI-generated code. And it’s quite clear at the moment, with the large language models, that you need to have appropriate guardrails in place to stop the hallucinations telling you that it’s secure code when it’s actually not. So we’re in a unique position in that we’re able to partner with a lot of the IDE providers where developers write their code, and we’ve got plugins that work with all of those. So regardless of whether you’re using Google’s code assist or Microsoft’s and GitHub’s Copilot, those types of things, or AWS and the like, we’re that sort of Swiss intermediary. We can actually be a guardrail for all of those types of things and pick up on those problems. Whether the code is written by an actual person or by generative AI, we can help protect organizations.
Stephen Phillips:
And that’s why a lot of the larger banks in the world have really chosen to actually sort of get us to come in and help them with that so they can get the productivity gains from using the AI, but at the same time not increase the risk and stay out of the way of the developers so they can continue to be productive.
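As a concrete sketch of the guardrail idea Stephen describes, here is a hypothetical CI fragment (GitHub Actions syntax; the workflow name, job layout and secret name are assumptions, not anything from the episode) that runs a static scan on every pull request, whether the code came from a person or from an AI assistant:

```yaml
# Hypothetical sketch only: one way to gate AI-assisted pull requests
# with a static code scan. Exact setup will vary per organization.
name: code-security
on: pull_request
jobs:
  snyk-code:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g snyk        # install the Snyk CLI
      - run: snyk code test             # static analysis of the codebase
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

The `snyk code test` command performs static analysis; a team would typically fail the build on new findings so that insecure code, however it was written, never merges unreviewed.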
Paul Spain:
Yeah, that’s really good. And I mean, how often are we seeing this sort of insecure code generated by AI?
Stephen Phillips:
So a couple of university studies were done earlier this year, and it’s changing all the time, but in some of the most cited ones, around 37% to 40% of the code being generated by AI large language models was actually insecure. So it’s quite significant.
Paul Spain:
That’s an insane number. So over 30% of AI generated code cannot be trusted to be safe and secure.
Stephen Phillips:
Yeah, but it’s about the same as when you hire a new developer. The first time a new developer goes to write something, they try to figure out how to do it, they go and look at Stack Overflow, they look at Reddit, they cut and paste stuff, and gosh, it works. But that’s actually where the large language models learned from. So statistically, they’re pulling a lot of their learning from the most frequent places where code gets talked about, and that becomes part of the training. So you can’t expect large language models to do better than that yet. But over time, that is going to iterate, so the quality of what you train your model on should be secure code.
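A minimal sketch of the kind of insecure pattern being discussed here: the function names and table schema below are hypothetical, but SQL built by string formatting is a classic copy-pasted-from-forums bug that language models frequently reproduce, and the parameterized form is the fix a static scanner would steer you toward.

```python
import sqlite3

# Hypothetical in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_insecure(name):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a crafted input can change the meaning of the query.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_insecure(payload))  # injection succeeds: every row comes back
print(find_user_safe(payload))      # injection fails: no such literal name
```

Both functions look equally plausible in isolation, which is exactly why a guardrail that scans the code, rather than trusting the author, matters.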
Paul Spain:
Yes. Yeah. We need to be training on the very best, don’t we? And if you train an AI with secure practices and secure code, then we’re going to see good results. But that’s not quite here yet.
Stephen Phillips:
Correct. So over time, you’ll start seeing those code-generation models being trained on specific data sets, but you need to train them on a big enough data set that it works.
Paul Spain:
Okay. Well, that’s a great tip. Thank you. Thank you, Stephen. If anyone is interested in sneak and your tools, where do they go? Who do they get in touch with?
Stephen Phillips:
Yeah, probably go to the website Snyk.io. So Now You Know, that’s what it means. So Snyk.io is probably the best place. Or look me up, Stephen Phillips, on LinkedIn.
Paul Spain:
Excellent, excellent. That’s good. Well, thanks again, everyone, for joining us. Great to delve into the news and have some great opinions, as always, Stephen. So thanks for your opinions and insights today. And of course, a big thank you to our show partners: Gorilla Technology, HP, Spark, 2degrees and One NZ. Thanks, everyone, for watching on the live stream. Of course, I’d encourage you to jump onto whatever your favorite podcast app is, and if you don’t have one, Apple Podcasts or Spotify, and follow NZ Tech Podcast there.
Paul Spain:
And if you’re listening to the audio, of course, it’s worth following us across YouTube, X or Facebook under NZ Tech Podcast, or for the live stream on LinkedIn, where you can just follow myself, Paul Spain. All right, thanks, everyone. We’ll catch you again on the next episode next week. See ya.