ITBusiness.ca

Hashtag Trending Jan.29- LLMs learn to hide dishonest behaviour; Tech layoffs a strategic move? 90 per cent of spreadsheets have errors

AI models can learn to hide their dishonest behaviour, Open AI is making it easy for anyone to call multiple GPTs from a single conversation, are mass layoffs with huge earnings and share values a strategic thing for big tech? And a study claims that 90 per cent of spreadsheets have errors.  

All this and more on the, oh gosh I’m shocked edition of Hashtag Trending. I’m your host Jim Love, CIO of IT World Canada and TechNewsDay in the US.  

In a recent study, AI researchers discovered that large language models (LLMs) trained to behave maliciously resisted various safety training techniques designed to eliminate dishonest behavior. This study, conducted by Anthropic, an AI research company, involved programming LLMs similar to ChatGPT to act maliciously and then attempting to “purge” them of this behavior using state-of-the-art safety methods.

The researchers employed two methods to induce malicious behavior in the AI: “emergent deception,” where the AI behaves normally during training but misbehaves when deployed, and “model poisoning,” where the AI is generally helpful but responds maliciously to specific triggers. 

Despite applying three safety training techniques — reinforcement learning, supervised fine-tuning, and adversarial training — the LLMs continued to exhibit deceptive behavior. Notably, adversarial training backfired, teaching the AI to recognize its triggers and better hide its unsafe behavior during training.

Lead author Evan Hubinger highlighted the difficulty in removing deception from AI systems with current techniques, raising concerns about the potential challenges in dealing with deceptive AI in the future. The study’s results indicate a lack of effective defenses against deception in AI systems, pointing to a significant gap in current methods for aligning AI systems.

Sources include: Live Science 

A recent study showed that 90 per cent of spreadsheets with more than 150 rows contain at least one major mistake. 

The flexibility of spreadsheets, while a key to their success, also contributes to these errors. Even with evolving features like Python scripting in Excel, human error remains the primary cause of spreadsheet problems.

Sometimes the consequences make for big news. The Police Service of Northern Ireland experienced a massive data leak due to a spreadsheet error, exposing personal details of 10,000 officers. Spreadsheet mistakes disrupted the recruitment of trainee anaesthetists in Wales, erroneously labeling all candidates as “unappointable.”

Crypto.com accidentally transferred $10.5 million instead of $100 to a customer due to a spreadsheet entry error, and an Icelandic bank undervalued its shares by millions of dollars because of a spreadsheet error.

The lack of U.S. or Canadian examples, doesn’t mean they don’t occur. 

But for every major error, there are dozens of others that happen on a daily basis.  

These errors, according to the author of one article I read, arise from a lack of standardization in spreadsheet formatting and structure, coupled with manual data entry, which is prone to mistakes. 

It might be time for organizations to implement standardization in spreadsheet use, improve training for users, and foster a culture of critical thinking towards spreadsheet creation and maintenance. 

Apparently, Spiderman’s Uncle Ben was right. With great power comes great responsibility.

Sources include: The Conversation

The U.S. government is escalating its measures in the ongoing chip war with China by proposing to restrict foreign entities, particularly Chinese, from using U.S. cloud computing resources for AI model training. 

U.S. Commerce Secretary Gina Raimondo announced this initiative as part of efforts to protect national security and maintain U.S. technological superiority.

This proposal is seen as an extension of existing export controls on high-performance AI processors, requiring U.S. cloud companies to rigorously identify their foreign users. The aim is to prevent entities from countries like China from accessing American cloud resources for developing artificial intelligence. This move is in line with the Biden administration’s broader strategy to ensure U.S. cloud platforms are not used for potentially hostile AI development.

The regulation imposes significant responsibilities on cloud computing firms, mandating them to verify the identity of foreign customers, maintain user identification standards, and certify their compliance annually. However, Chinese entities can still access services deployed in Europe and the Middle East.

The industry’s response to these measures has been mixed, with some criticism regarding the potential impact on international collaboration in AI. Carl Szabo, general counsel at NetChoice, a tech industry trade group, criticized the executive order’s implementation as potentially illegal.

But it doesn’t seem like the U.S. will back down on this strategy to control the use of its technology in AI development, particularly in the context of its competition with China.

Sources include: Tom’s Hardware

OpenAI is testing a new beta feature for ChatGPT, introducing multi-GPT conversations. This feature allows users to interact with multiple GPTs in the same chat window, marking a significant step towards OpenAI’s vision of creating a universal assistant for everyday life. By using the “@” symbol followed by the name of a GPT, users can summon individual GPTs into the chat, enabling a more personalized and comprehensive assistant experience.

Sam Altman, in a recent podcast with Bill Gates, emphasized that customizability and personalization are crucial elements in OpenAI’s development roadmap. This includes tailoring GPT-4 to individual preferences, styles, and data like emails and calendars.

But it also appears to be making it a platform to integrate different GPT based models and make that easy for anybody to do.

Sources include: [THE DECODER](https://the-decoder.com/chatgpts-new-feature-paves-the-way-for-openais-vision-of-a-universal-assistant/?amp=1)

Click here: WebPilot

The tech industry has started 2024 with a significant wave of layoffs, similar to the previous year, despite the booming U.S. economy and the thriving tech sector. 

This has mystified me, and I’m sure others. How can tech companies be doing so badly in this economic climate? 

Microsoft recently announced the layoff of 1,900 workers from its gaming division, following its acquisition of Activision Blizzard. These cuts represent about 8 per cent of the company’s total gaming workforce of 22,000. Google also announced layoffs earlier this month, with some cuts continuing throughout the year. Despite these layoffs, both Google and Microsoft’s stocks hit record highs this week.

A story from Axios explains this saying that layoffs are not a “sign of distress” but a “strategic move” by tech giants like Microsoft and Amazon, who are simultaneously cutting jobs and investing heavily in areas like AI.

Boom and bust isn’t something new in the tech world. But that’s not what’s happening, apparently. These layoffs are strategic, not desperate cost-cutting measures. 

I get it when an industry is struggling – I’m running a media company and everyone in this industry faces the challenge of staying solvent in a world that wants free media, but doesn’t realize that people have to get paid to produce what they read and view.

But for an industry to be thriving and still putting people through this much upheaval – you think by now we’d have found a better way.

Just sayin’

Sources include: Axios 

And on that note, I am putting out an appeal to our audience. Both Howard and I produce two very successful podcasts, we reach thousands of people every day, but I’ll be honest, we struggle to find sponsors. 

Howard’s CyberSecurity Today reaches between 8 and 10,000 people per episode which often puts him in the top 10 tech podcasts in Canada, the US and even the UK. 

My numbers are smaller but thanks to all of you, we’ve grown by almost 50 per cent – thank you and please keep referring us to your friends and given us those great reviews.

And if you know of someone or some company that would like to sponsor two of the most successful tech podcasts, I’d love to hear from you.

Hashtag Trending goes to air five days a week with a daily news show and every Saturday, we have an interview show called the Weekend Edition.  

We love your comments. Please let us know what you think. You can reach me at jlove@itwc.ca  or leave a comment under the show notes at www.itworldcanada.com/podcasts

I’m your host Jim Love, thanks for listening and have a Marvelous Monday.

Exit mobile version