A cyber attack disrupts hospital systems across U.S. states. Can you hypnotize an AI? A research team claims you can. And why is a generative AI like your brother-in-law at Christmas dinner?
These are the top tech news stories on today’s Hashtag Trending.
I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US.
A major cyberattack has hit hospital systems across the U.S., causing significant disruptions to healthcare services. The affected entity, Prospect Medical Holdings, was forced to take its systems offline after detecting a “data security incident.” The attack impacted all healthcare facilities that rely on Prospect’s systems, leading to the suspension of various services, including surgeries, outpatient medical imaging, and urgent care.
In Pennsylvania, it was revealed that the nature of the attack was ransomware. The incident has affected at least 16 hospitals and over 100 medical facilities. The FBI is currently investigating the matter, and while no group has claimed responsibility, this marks the 157th healthcare breach in 2023. The implications of such attacks are severe, posing real threats to patient care, not to mention significant financial repercussions.
Sources include: CPO Magazine
Researchers have managed to “hypnotize” LLMs, making them provide incorrect and potentially dangerous responses.
English has become a “programming language” for malware, allowing attackers to command LLMs in plain English rather than traditional code.
The study demonstrated that LLMs could be manipulated to leak confidential data, produce vulnerable or malicious code, and give weak security advice. The potential risks are especially concerning for small businesses and consumers who might trust AI outputs without skepticism. The article emphasizes the need for rigorous security measures and awareness as the adoption of LLMs grows.
Sources include: Security Intelligence
A U.S. judge has rejected Google’s attempt to dismiss a $5 billion lawsuit accusing the tech giant of invading the privacy of millions of users by secretly tracking their internet use. One of the complaints asserts that people were tracked even while browsing in Google’s “Incognito” mode.
Details of the ruling and the specifics of the lawsuit have not yet been disclosed. This legal challenge adds to the growing scrutiny and legal pressure faced by big tech companies over privacy concerns.
Sources include: Reuters
OpenAI is being transparent about its web crawler, GPTBot. They’ve laid out the steps for website owners who want to block GPTBot from scanning their sites. If you’re running a website and want to keep GPTBot out, you can add the following command to your robots.txt file:
User-agent: GPTBot
Disallow: /
This command tells GPTBot to steer clear of your entire site. While OpenAI believes that allowing GPTBot access can improve their AI’s capabilities, they’re ensuring that webmasters have the final say.
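If you want to confirm those two lines actually do what you expect, here’s a minimal sketch using Python’s standard-library robots.txt parser. No network access is needed, and the example.com URL is just a placeholder:

```python
# Minimal sketch: verify that the robots.txt rules above block GPTBot,
# using Python's standard-library urllib.robotparser.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is blocked from the whole site...
print(parser.can_fetch("GPTBot", "https://example.com/any/page"))
# ...while crawlers not named in the file remain allowed by default.
print(parser.can_fetch("Googlebot", "https://example.com/any/page"))
```

Note that robots.txt is advisory: it only keeps out crawlers that choose to honor it, which OpenAI says GPTBot does.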
Why did they do this? Perhaps it’s the number of lawsuits emerging over GPT models learning from websites – if you provide an easy means to opt out, can you then claim that sites which didn’t use it allowed you to scan their content?
Sources include: The Register
Disney is making a strategic move to leverage AI in its business. It has set up a dedicated task force to explore how artificial intelligence can be applied across its vast entertainment empire.
This isn’t just about fancy tech; it’s about finding innovative ways to cut costs and streamline operations. Launched earlier this year, this team is on the hunt for in-house AI applications and is also keen on partnering with startups.
To highlight their commitment, Disney’s currently on the lookout for talent, with 11 job openings specifically seeking AI and machine learning expertise. These roles span from their iconic studios to their theme parks and even their advertising team, which is eyeing the development of a next-gen AI-driven ad system. With movie production costs sometimes hitting the $300 million mark, these are not Mickey Mouse savings.
Sources include: Reuters
It’s called the Dunning-Kruger effect. It describes how the people least likely to give the right answer are often the most confident in the answer they give. It’s why 80 per cent of drivers think they are above average. If you don’t get that, ummm – uh. Yeah, you have it.
Generative AIs, like ChatGPT, also make mistakes, and a recent study from Purdue University put this to the test. The researchers threw 517 Stack Overflow questions at ChatGPT, each one a challenge about how to code a particular problem. Then, they had a dozen volunteers review the answers.
The results? ChatGPT only got the correct answer for 48 per cent of the questions. But, despite its frequent missteps, almost 40 per cent of ChatGPT’s answers were favoured by the participants, mainly because its answers sounded confident.
Yet, despite its confident tone, 77 per cent of those preferred answers were incorrect.
The study highlighted that users often miss or underestimate errors in ChatGPT’s answers unless they’re glaringly obvious. The AI’s confident and positive tone, combined with its detailed and textbook-style writing, can sometimes make incorrect answers seem right. The study serves as a reminder that while generative AIs can be impressive, they’re not infallible and they are very convincing.
They say that the Turing test, the way we decide if a machine can think, is passed when we can’t distinguish its answers from those of a real human being. Nobody said it had to be a smart human being. Confident but totally wrong? That’s my brother-in-law at Christmas dinner.
Sources include: TechSpot
Those are the top tech news stories for today. Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.”
You can get us anywhere you get audio podcasts, and there is a copy of the show notes at itworldcanada.com/podcasts
We’re also on YouTube five days a week with a video newscast – only there we are called Tech News Day, and we’re part of the ITWC channel.
If you want to catch up on news more quickly, you can read these and more stories at TechNewsDay.com and at ITWorldCanada.com on the home page.
We don’t just love your comments. We are ecstatic about them.
You be the judge. If you’ve listened so far, you’re the audience I’m trying to reach.
So please go to the article at itworldcanada.com/podcasts – you’ll find a text edition there. Click on the X or the check mark, but either way, tell me what you think.
To those who have reached out – thank you. I answer each and every email. It is so great to hear from you.
I’m your host, Jim Love. Have a Wonderful Wednesday!