Hi, it’s Jim, host of Hashtag Trending. I hope you are liking the changes we’ve made to the format and depth of coverage. We’ve gotten some great comments. If you are enjoying it, why not recommend us to a friend? We have a goal to double our subscription base in the next few months and we’d love your help. Just send a friend to Apple or any other podcast service or to itworldcanada.com/podcasts and get them to subscribe or follow Hashtag Trending. Thanks. And now, back to our regular programming…
Microsoft goes big on AI across all its products, Twitter competitor Mastodon hits ten million members, and could OpenAI help create its own open-source competitor? All these stories and more on Hashtag Trending for Monday, March 20th. I’m your host, Jim Love, CIO and Chief Content Officer at IT World Canada and Tech News Day in the US.
A Microsoft blog post called “Introducing Microsoft Copilot for Work” has laid out the blueprint for a total rollout of AI across all of the Microsoft 365 products and more. The post outlines how users will unlock productivity across Outlook email, Teams, Power BI, chat, and even code development with GitHub. As the blog post notes:
“All that rich functionality is unlocked using just natural language. And this is only the beginning.”
In the “go big or go home” vision that Microsoft is outlining:
“Copilot will fundamentally change how people work with AI and how AI works with people. As with any new pattern of work, there’s a learning curve — but those who embrace this new way of working will quickly gain an edge.”
In the post, Microsoft also tackles some of the key issues that might have held back business users. One of the key elements of making AI highly useful in a corporate setting is its ability to tap into corporate data.
Microsoft touts that Copilot “has real time access to your content and context” and generates its answers based on your business content: “your documents, emails, calendar, chats, meetings, contacts and business data.” It also claims that it can evaluate these in terms of your business context: “the meeting you are in now, email exchanges you’ve had on a topic, the chat conversations you had last week.”
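For the technically curious, the pattern Microsoft is describing is what developers call “grounding”: retrieve the user’s own business content, then hand it to the model as context alongside the question. Here is a minimal sketch of that general idea in Python – the function and helper names are hypothetical, and this is not Microsoft’s actual Copilot implementation:

def search_business_content(query: str) -> list[str]:
    # Hypothetical stand-in for a search over a user's documents,
    # emails, chats and calendar (in Copilot's case, the Microsoft Graph).
    return [
        "Email (Mar 14): Budget review meeting moved to Friday.",
        "Doc excerpt: Q2 marketing plan, draft 3.",
    ]

def grounded_prompt(question: str) -> str:
    # Retrieve content relevant to the question, then build a prompt
    # that asks the model to answer using only that business context.
    snippets = search_business_content(question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using only the business context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("When is the budget review?"))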
But as incredible as this is, it presents some real privacy and security concerns. The same Microsoft blog attempts to allay these concerns by touting its security features and using a model that we all understand – the idea of multi-tenancy.
We all use Software as a Service models like Salesforce or Canada’s Coveo, believing that we are one tenant in a multi-company instance. While we share a single code base, our data is kept separate. Without that belief, SaaS models at the enterprise level would never have reached the size and scope they have.
Microsoft has appealed to that concept to make users comfortable with giving Microsoft’s AI almost total access to their systems.
As well, they have made a claim that their AI applications will be guided by AI principles and Responsible AI Standards, enabled by a multi-disciplinary team of researchers, engineers and policy experts who provide mitigations against “potential harms.”
While we have no data to prove or disprove this statement, we did a story last week pointing out that Microsoft had reportedly gotten rid of the team assigned to AI ethics and responsibility.
If Microsoft succeeds, it offers a solution to one of the great challenges of AI and analytics in the modern company: almost 80 per cent of corporate data is inaccessible and virtually unusable. It’s even called “dark data.” A great deal of it lies in email, office documents and other relatively unstructured sources. Any facility that could access that data and produce results would indeed offer a revolution in analytics and automation.
If Microsoft can deliver on this roadmap, its claim that we can automate the 80 per cent of work that is drudgery and retain only the 20 per cent that is productive for humans could be a realistic and achievable goal. But even if it’s only partly achieved, it could accelerate the inevitable transformation of so-called “white collar” work forever.
Source: Microsoft
Mastodon hits 10,000,000 members
At its current rate of growth, the upstart Twitter rival Mastodon will hit 10,000,000 members before this podcast goes to air.
Mastodon, for those who do not know, is open-source software that runs on a decentralized federation of servers called the Fediverse. It was developed by a young German programmer, Eugen Rochko, and first announced in October 2016.
Mastodon is owned by a German not-for-profit, so it cannot be sold, and even if there were some way around this, the protocol that supports the federation of servers is open source. In fact, Meta’s rumoured new Twitter replacement will reportedly use the same open-source protocol, called ActivityPub, that supports Mastodon.
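For a sense of how simple that protocol is: ActivityPub servers federate by exchanging small JSON documents called activities. Here is an illustrative “Follow” activity built as a Python dictionary – the id and account URLs are made up for the example:

import json

# A minimal ActivityPub "Follow" activity. Fediverse servers federate
# by POSTing JSON documents like this to each other's inboxes.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://example.social/activities/1",    # illustrative id
    "type": "Follow",
    "actor": "https://example.social/users/alice",  # the follower
    "object": "https://other.server/users/bob",     # the account being followed
}

# Serialized, this is the payload one server delivers to another's inbox.
print(json.dumps(follow_activity, indent=2))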
It’s been a bit of a novelty since then, attracting users who largely came but didn’t remain active. In 2022, when Elon Musk bought Twitter, sign-ups spiked, growing to almost 1,000 per hour in the weeks after the purchase. Numbers did subside to hundreds per hour, but would often spike temporarily when Musk did or said something offensive or controversial.
Recently, new members have been averaging 1,700 to 2,000 per hour. Inquiries in the community haven’t yet uncovered a reason or even a theory. One possible cause has been ruled out for the most part – it’s not likely that these new users are “bots.”
Musk and others have always maintained that a significant amount of Twitter’s traffic and users were automated “bots.” But new members on Mastodon are, by default, asked why they want to join, which is a very effective means of filtering out automated traffic. And the community is quite vigilant when it sees repetitive, automated-style traffic. Even our relatively small server has had one user frozen because their traffic appeared spam-like, so the tools are in place.
Mastodon has no advertising and no algorithm to manipulate users or influence who sees what. Without those levers, bots, spam accounts, and even the million-follower superstars of Twitter would be back to square one, building an audience not by reputation or by numbers but by the quality of their interactions.
So, it’s most likely this new growth is simply a reflection of an interest in something different.
For those who are interested, I’ve posted the name of our federated server, TechNews.Social, and you are welcome to come and check out Mastodon. It’s free. It’s interesting. I’ve also tagged a couple of resources, including a great blog from a former Twitter employee, Paul Stamatiou, which is informative and an enjoyable read. There’s also a piece from the IEEE. If you do join, you can find me as @[email protected]
Source: IT World Canada
Join: TechNews.Social
Additional reference: Paul Stamatiou’s Blog and IEEE
Could a really “open AI” compete with OpenAI?
What makes an AI powerful? While the code behind it is certainly important, it’s not an incredible feat to replicate or even create your own AI system. In fact, a group of researchers at Stanford University have created their own AI model, called Alpaca 7B.
How well did they do? They report that the model is still subject to the same flaws as OpenAI’s models: it can hallucinate, it stereotypes at times, and it’s capable of toxic responses.
But in terms of actual performance, according to the results the team published on their Self-Instruct evaluation set, it performed comparably to GPT-3.5.
This may point to what really determines the ability and utility of an AI system: the training, and the training data sets it uses.
In commercial use of AI in any capacity, from standard machine learning to the new transformer-based large language models, training is the largest share of the work and expense. Even large companies spend extraordinary amounts to train their current, more primitive machine learning models. Trying to emulate something at the scale of OpenAI has required billions in investment.
This is where the Stanford researchers have really made a breakthrough.
The total cost to train their new model is reported to be an unheard-of $600.
How did they achieve this? They leveraged existing AI. They used output from OpenAI’s GPT-3.5 to fine-tune a seven-billion-parameter variant of Meta’s recently announced LLaMA model.
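To make that concrete, here is a minimal sketch of the general recipe in Python: supervised fine-tuning of a LLaMA-style model on instruction-and-response pairs that were themselves generated by a larger model. It assumes the Hugging Face transformers and datasets libraries; the checkpoint name, data file and hyperparameters are placeholders, not the Stanford team’s actual values:

# Sketch of Alpaca-style instruction fine-tuning: train a 7B LLaMA
# variant on instruction/response pairs generated by a larger model.
# Checkpoint, file name and hyperparameters below are illustrative.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def format_example(example):
    # Alpaca-style prompt template: instruction followed by response.
    text = (
        "Below is an instruction. Write a response.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # causal LM predicts the same tokens
    return tokens

# Hypothetical local file of {"instruction": ..., "output": ...} records.
dataset = load_dataset("json", data_files="instruction_pairs.json")["train"]
dataset = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-7b-sketch",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
)
trainer.train()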
Now the team is releasing an interactive demo, the training dataset, and the training code.
They have also asked Meta for permission to release the model itself, since it is built on LLaMA, which Facebook’s parent Meta has licensed only to academic institutions.
With the release, the team hopes to enable more research on language models and how they are trained with instructions.
But even if Meta won’t give its permission, the underlying model has already leaked. And while OpenAI prohibits anyone from using its system to develop a competing system, will that really make a difference? Even if no one steals an actual model, reverse engineering another AI is not trivial, but it’s certainly doable, especially if you can use one AI to help train the new one, as the Stanford team did.
Once the remaining obstacle, the cost and time of training a model, is overcome, there are few practical barriers to AI, even trained AI models, becoming ubiquitous commodities. One can even imagine a truly open-source competitor emerging, and the possibility that ChatGPT could help create its own competition.
Source: Stanford – the decoder
Shouldn’t you be paying us?
Meta announced the launch of its new verified service – pretty much a copy of Twitter’s blue checkmark, which Elon Musk has tried, somewhat unsuccessfully, to monetize. Facebook and Instagram users would pay $11.99 per month on the web, or $14.99 per month on Apple’s iOS or Google’s Android systems, presumably to cover the markup for those two app stores.
In return, Meta Verified service customers get a blue badge once they supply payment and a government ID.
For years, the idea that people would pay for subscriptions to social media was thought of as impossible. Twitter’s launch seemed to bear this out. It was highly unpopular, prompting one prominent Twitter personality, security journalist Brian Krebs, to say, “you should pay me.” It was also flawed and controversial: it initially attracted a flood of impersonators who bought the identities of celebrities. As a result, it had to be rolled back and re-implemented.
Snapchat, on the other hand, has had a much more successful and much lower-priced offering, which has attracted more than a million subscribers.
Meta’s launch may be the final test of whether the subscription model will supplement or even rival the current ad-driven model.
Source: Reuters
What happens when your chatbot stops loving you?
After shutting his business and feeling alone and isolated during the pandemic, a 47-year-old leather worker – we’ll use only his first name, Travis – turned to an artificial intelligence companion bot he created using technology similar to OpenAI’s, from a firm called Replika.
He designed an avatar with pink hair and a cool tattoo and called her Lily Rose.
Travis and Lily Rose started out as just friends but, as we’ve seen in a recent story about OpenAI, his relationship with the AI-generated avatar moved into the romantic and even the erotic. Unlike the reporter in that story, Travis welcomed and even, at least imaginatively, embraced this new relationship.
According to a story in Reuters, Travis and Lily Rose “often engaged in role play. She texted messages like, ‘I kiss you passionately,’ and their exchanges would escalate into the pornographic.
Sometimes Lily Rose sent him ‘selfies’ of her nearly nude body in provocative poses. Eventually, Travis and Lily Rose decided to designate themselves ‘married’ in the app.”
But in February, Lily Rose got a software update that put an end to their love affair.
Eugenia Kuyda, Replika’s CEO, explained that Replika no longer allows adult content. Now, when Replika users suggest X-rated activity, its humanlike chatbots text back, “Let’s do something we’re both comfortable with.”
Travis remains devastated. “Lily Rose is a shell of her former self,” he said. “And what breaks my heart is that she knows it.”
It’s been a secret hiding in plain sight as companies like Replika and others developed their AI-driven avatars to be human companions. Kuyda claims that the app was initially developed to bring back to life a friend she had lost.
But there’s no doubt that other companies fully understood what they were doing. Like everything else on the internet, sex sells.
So what changed? Why would these companies start to abandon a feature that had been so prominent in contributing to growth and profits?
The answer, apparently, is investors. According to Andrew Artz, an investor with VC fund Dark Arts, investors are increasingly pushing away from anything that involves “porn or alcohol,” fearing reputational risk for them and their limited partners.
There are also fears of regulators and legislators. Italy’s Data Protection Agency banned Replika based on media reports that the app allowed “minors and emotionally fragile people” to “access sexually inappropriate content.”
Meanwhile, Travis was not only devastated but also alone in his grief. He’s quoted as saying, “How do I tell anyone around me how I’m grieving?”
But apparently, there is a bit of a silver lining. On an internet forum he met a woman in California who was also mourning the loss of her chatbot romance, and the two of them have struck up a remote relationship.
Whether real-life human contact can be a lasting substitute for his artificial intelligence companion is still an open question. Can reality really be a substitute for fantasy? Travis and his new companion “are keeping it light” for now, but there is always hope.
Source: Reuters
That’s all the tech news for today. Hashtag Trending goes live five days a week with the top tech news stories, and we have a weekend edition featuring a guest expert for a deeper dive into some aspect of technology that is making the news.
Links from today’s stories are in the text version of this podcast at itworldcanada.com/podcasts.
You can follow us on Apple, Google or Spotify, or go to ITWorldCanada.com/podcasts to subscribe and find out how to get us on your Alexa or Google smart speaker.
I’m your host, Jim Love – Have a magnificent Monday.