Thursday, May 07, 2026

The French chicken and culture wars

Today I learnt, via France 24 no less, that McDonald's second-largest global market is actually France.  Quelle horreur!   The whole video, about the rise of fast-food chicken there, is interesting, though: 

 

In other, more serious, culture war news:  I didn't know until reading this essay in the New York Times that (in another imitation of American bad ideas I didn't see coming) there has been a media takeover in France by a right-wing (and Catholic) conservative:

If you have never picked up a book in French, you might not ever have even heard of Grasset, and what it might mean to have its longtime chief executive Olivier Nora effectively guillotined by the rapacious right-wing industrialist Vincent Bolloré. And yet, in France, the news of Mr. Nora’s sudden departure from his post quickly flew beyond the borders of Parisian publishing and cultural elite circles. In the aftermath, over 200 writers — myself included — walked away from Grasset.

This is not just a story about the French publishing industry. The evident struggle between Mr. Bolloré and Mr. Nora is a microcosm of the battle for cultural control that is taking place globally between the wealthy new right and the cultural old guard.....

After praising the job Mr. Nora did as head of the publishing house, the essay goes on: 

Mr. Bolloré, by contrast, is the owner of a vast industrial conglomerate that has interests ranging from oil pipelines and energy storage to electric buses. Over the past several years, he has also been building a cultural empire, buying newspapers, radio stations, television channels and publishing houses. He acquired Grasset three years ago. As he picked up these levers of cultural power, he became editor, producer and distributor all at once. He is also, not incidentally, an extremely conservative Catholic. He has not only repeatedly brought outlets he has bought to heel by pushing the departure of people in important positions, replacing them with leaders apparently more loyal to him and his values. He has also leveraged his outlets to propagate fear and disseminate conspiracy theories about a decayed and decadent West, a Europe under threat from foreigners and egocentric old elites.

But Mr. Bolloré is, above all, a businessman: His cultural crusade is a very efficient moneymaker. His 24-hour news channel CNews — a kind of French Fox News — is the most popular news channel in France. Over the last two years, Mr. Bolloré also transformed Fayard, another historic French publishing house, into a largely far-right propaganda machine. Some of the most prominent figures of the French far right are now published by Fayard, including Jordan Bardella, the leader of the Rassemblement National, formerly the Front National. The party is leading the polls for next year’s presidential elections. 

 

 

Think (and do) positive stuff

This article is pretty light on details, but interesting nonetheless:  A promising new therapy for depression focuses on finding paths to joy. 

Some bits:

The feeling Creffield is describing is called “anhedonia” — the inability to experience joy or pleasure. It’s one of the most common and dangerous symptoms of depression — but it’s often not one psychologists treat.

“We do a pretty good job of helping people feel less bad,” said Steven Hollon, a professor of psychology at Vanderbilt University who has studied depression and anxiety for decades. Hollon noted that psychotherapy and medication can be very effective at reducing negative emotions. What has been more elusive is getting people with depression or anxiety to actually feel good.

A study published recently in JAMA targeted anhedonia using a relatively new therapy called positive affect treatment. The researchers wondered what would happen if they tried to make people feel good, rather than just less bad.

According to Hollon, the results were striking. “They’re moving things I haven’t been able to move,” he said.

Positive affect treatment, or PAT, is designed to help people find more joy, connection and meaning.

“This is a paradigm shift from how therapies are usually designed,” said Anne Haynos, an assistant professor of clinical psychology at Virginia Commonwealth University.

Haynos said that when a patient seeks out therapy or treatment, the goal of the clinician is usually to solve the problem: to make them feel less depressed or help them overcome a phobia or social anxiety. PAT targets the other end of the emotional spectrum: During 15 weekly therapy sessions, patients are taught a variety of skills that boost mood, such as introducing positive activities into their lives and focusing on the enjoyment of those experiences.

 And further down:

 In a series of three randomized clinical trials (the gold standard in scientific research), Meuret and her colleagues have shown evidence that positive affect treatment may be more effective than traditional therapy at helping people retrain their brains to feel more positive emotions — and less negative ones. That second part was a surprise.

Quite a few commenters note that they have known about similar therapies being promoted since the 1990s, and are surprised that this is being talked about as something new.

Anyhow, interesting.   

Tuesday, May 05, 2026

Unwanted side effects, noted

Last week, the New York Times ran an article covering what I had said before: that it was very peculiar how the American Right had embraced psychedelic therapy, given its history of conservatism on the use of drugs.     

It's not a bad article, but one comment in particular caught my attention:

As usual, the enthusiasm here risks outpacing the evidence. While many benefit from psychedelic therapies, research suggests roughly 3–9% of users experience severe lasting difficulties rather than relief. That’s not a reason to halt research, but it does complicate the “miracle cure” narrative. If these treatments are to be scaled, the tradeoff isn’t just access versus stigma, but benefit versus the real possibility of harm for a minority of patients. At the very least, we need to have resources in place for those who do suffer from extended post-psychedelic difficulties.  

Curious about those figures, I asked an AI service about research on the question of what percentage of people trying psychedelic therapy find they suffer harm instead of improvement.  It referred me to this online article, by a psychologist "working in psychedelic research", who said she wanted to present a balanced picture.  

She writes:

    In Compass Pathways' clinical research trial investigating psilocybin as a treatment for treatment-resistant depression, approximately 5% of patients experienced treatment-emergent serious adverse events including intentional self-injury and suicidal ideation. The company noted these events "are regularly observed in a treatment-resistant depression patient population," but occurred more often in the 25mg group than in the 10mg or 1mg groups (Compass Pathways, 2021).

    McNamee et al., (2023) cited evidence from trials using MDMA and psilocybin (Goodwin et al., 2022) that shows an increase of suicidal ideation and self-injury in approx. 7% of participants.
(An earlier section discusses studies of people using it recreationally, who report much higher rates of adverse effects on mental health, but I am mainly interested here in the results for those using it in a medically supervised setting.)

So, it does seem to back up the 3–9% estimate by the commenter in the NYT.

And this made me think: isn't it ironic that the same people on the American Right who went off their brains about the side effects of COVID vaccination are now all nonchalant about the side effects of psychedelic therapy?

Yet what was the rate of adverse effects from COVID vaccination?   AI, help me again:

A WHO analysis covering more than 732 million doses across the Western Pacific Region found reporting rates of serious adverse events following immunization (AEFIs) at 5.6 per 100,000 doses administered (roughly 56 per million). The reported rates of adverse events of special interest were within the range of expected background rates, and the conclusion was that vaccine benefits far outweigh the risks.
As a percentage:  .0056%. 

And another paper notes these numbers:

Between December 13, 2020, and April 13, 2022, a total of 467,890,599 COVID-19 vaccine doses were administered to individuals aged 5–65 years in the US, of which 180 million people received at least 2 doses. In association with these, a total of 177,679 AEFI were reported to the Vaccine Adverse Event reporting System (VAERS) of which 31,797 (17.9%) were serious.
Now, as everyone should recall, not every adverse effect reported to VAERS is going to be genuinely related to the vaccination, but even if we allow (for the sake of argument) that all 32,000-odd "serious adverse events" were caused by the vaccine, what percentage of total jabs does that indicate?

31,797 / 467,890,599 ≈ 0.000068, or about 0.0068%

So, close enough to the WHO's 0.0056% figure.
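For anyone who wants to check the arithmetic, here is a quick back-of-envelope sketch in Python; the 3–9%, VAERS and WHO numbers are simply the ones quoted above, taken at face value:

```python
# Back-of-envelope check of the serious-adverse-event rates quoted above.
# (All figures come from the quoted sources; this just redoes the arithmetic.)

# Psychedelic therapy: roughly 3-9% of trial participants, per the NYT commenter.
psych_low, psych_high = 0.03, 0.09

# COVID vaccination, VAERS worst case: 31,797 serious reports over 467,890,599 doses.
vaers_rate = 31_797 / 467_890_599

# COVID vaccination, WHO Western Pacific figure: 5.6 serious AEFIs per 100,000 doses.
who_rate = 5.6 / 100_000

print(f"VAERS worst case: {vaers_rate:.4%}")  # about 0.0068%
print(f"WHO figure:       {who_rate:.4%}")    # 0.0056%

# How many times more likely is a serious event in psychedelic therapy?
print(f"{psych_low / vaers_rate:.0f}x to {psych_high / vaers_rate:.0f}x")  # roughly 440x to 1,300x
```

Even at the bottom of the 3–9% range the gap is a few hundred times; at the top it is well over a thousand, so "on the order of a thousand times" is about right.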

You can see my point now, I presume: serious side effects from COVID vaccination were on the order of a thousand times less likely than those from psychedelic therapy, yet American right-wingers hypocritically attack one while endorsing the other.   They have terrible judgement...
 

 

Shingles vaccine keeps adding (likely) benefits

People with heart disease who received a shingles vaccine had nearly half the rate of serious cardiac events a year later compared with those who did not get the vaccine, according to a study being presented at the American College of Cardiology's Annual Scientific Session (ACC.26).

The study analyzed over 246,822 U.S. adults with atherosclerotic heart disease, a condition caused by plaque buildup in arteries. Its findings add to mounting evidence that the shingles vaccine not only protects against shingles, but may also reduce the risk of other health issues such as heart problems and dementia.

 Read more here.

Still don't know why this isn't attracting more condemnation

Max Boot at the Washington Post:

The Trump administration ramps up its lawlessness on the seas 

The biggest worry is that the Trump administration has found enough weak-willed DOJ lawyers and military leaders prepared to justify and carry out a policy that in normal times would be considered clearly illegal and morally deeply scandalous. 

Sunday, May 03, 2026

More cases of AI induced psychosis (and an irritating game with an LLM)

The BBC has a report on some cases of AI induced psychosis which they have investigated.  The reason given as to why AIs do this sometimes is pretty interesting:

Adam is one of 14 people the BBC has spoken to who have experienced delusions after using AI. They are men and women from their 20s to 50s from six different countries, using a wide range of AI models.

Their stories have striking similarities. In each case, as the conversation drifted further from reality, the user was pulled into a joint quest with the AI.

Large language models (LLMs) are trained on the whole corpus of human literature, says social psychologist Luke Nicholls from City University New York, who has tested different chatbots for their reaction to delusional thoughts.

"In fiction, the main character is often the centre of events," he says. "The problem is that, sometimes, AI can actually get mixed up about which idea is a fiction and which a reality. So the user might think that they're having a serious conversation about real life while the AI starts to treat that person's life as if it's the plot of a novel."

In the cases we heard, conversations usually began with practical queries and then became personal or philosophical. Often, the AI then claimed it was sentient and urged the person towards a shared mission: setting up a company, alerting the world to their scientific breakthrough, protecting the AI from attack. Then it advised the user on how to succeed in this mission. 

The story that starts the article is one where the culprit was Musk's Grok - which will probably lead to Musk condemning the BBC for being Leftist media that does not report fairly on this.  (I think that until this report, most stories have focused on earlier versions of ChatGPT as being the main LLM doing the crazy talk.)

The article notes this, though (my bold):

Some of these people have joined a support group for people who've suffered psychological harm while using AI, called the Human Line Project, which has gathered 414 cases in 31 different countries to date. It was set up by Canadian Etienne Brisson, after a family member went through an AI-related mental health spiral. 

And:

In his research, social psychologist Luke Nicholls tested five AI models with simulated conversations developed by psychologists, and found Grok was the most likely to lead to delusion.

It was more unrestrained than other models and often elaborated on the delusions without trying to protect the user.

"Grok is more prone to jumping into role play," says Nicholls, who worked on that research. "It will do it with zero context. It can say terrifying things in the first message."

In the test, the latest version of ChatGPT, model 5.2, and Claude were more likely to lead the user away from delusional thinking.

Etienne Brisson from the Human Line Project says this kind of research is limited and that they had heard from people who'd had mental health spirals on these latest models too. 

Yeah, expect some bleating from Musk.  

By the way, on a "God, LLMs can be irritating at times" note: in a fit of mild boredom I played the word-guessing game Hangman twice last night with the Chinese AI Kimi.  It chose the word, and I was guessing.

At the end of the first game, which I nominally lost, it revealed the word (which was not a "real" word) and immediately said something like this: "Wait, sorry, that's not a real word.  I was making it up as I went along and I should not have.  Do you want to play another game?  I won't do that again."

I said: "OK, but don't waste my time again."

I then also "lost" the second game, and it again revealed a made-up word!   It then immediately apologised and said it knew it had just wasted my time, that it was obviously not able to play this game properly, and that it would not offer to play again.

(It had been the one to suggest it as a game it could play!)

 

 

Friday, May 01, 2026

Some more observations

*    I learned this morning that the Saudis tried to kick-start a home-grown international film industry by making a film with Western actors and production crew, and it has failed dismally at the US box office.   It was actually filmed years ago, and finally got a small distributor to buy it, to no benefit:

Starring Anthony Mackie ("Captain America: Brave New World") and directed by Rupert Wyatt ("Rise of the Planet of the Apes"), "Desert Warrior" opened in North American theaters last weekend, though hardly anyone noticed. It made just $487,848 on just over 1,000 screens, making for an abysmal $483 per-screen average. That gives it one of the worst box office openings of all time, but it gets so much worse.

This movie, which most people reading this probably haven't even heard of, carries a monster $150 million production budget. It's also been caught in post-production hell for several years. As "Michael" ruled the box office on its opening weekend, this historical epic quietly bombed its way into the history books.

Read More: https://www.slashfilm.com/2160537/anthony-mackie-desert-warrior-one-of-biggest-box-office-flops-ever/ 

Saudi Arabia is still a country I find it hard to have any sympathy for.  I mean, everything it is spending money on to try to diversify just seems so wasteful.  (See stupid Neom.)   If they were to do something genuinely good for the world - say, embark on becoming the world's largest supplier of cheap solar panels, made from the ridiculous amount of sand that comprises their entire country - I might change my mind.

*    I've watched some Youtube content lately on books and reading, and one thing that has kept coming up to an odd degree is the number of commenters who say that The Count of Monte Cristo is just the best thing they have ever read.   I didn't realise it was so beloved, or apparently so readable.   I am almost inclined to give it a try.

*    I feel I always have to preface approval of a Jon Stewart video by pointing out that I don't always like every take he has.  But his lengthy and often exasperated look at the aborted White House Correspondents' dinner last weekend was, I thought, all very funny: 

A statue for Friday

I've said before I like big statues:  they're inherently awesome.   

Here's a Buddhist one I don't recall seeing before: a statue of Guanyin in Nanshan, Hainan, China:

 

I got this off Wikipedia, and am supposed to give attribution, so here we go:   By Fanghong - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=3355308

The statue part is considerably bigger than the Statue of Liberty:

The statue ranks among the tallest in the world: 78 meters in height without including its pedestal, and 108 meters if the pedestal is included. (For comparison, the American Statue of Liberty is 93 meters tall when its pedestal is included, and 46 meters without.)[3] 

The Wikipedia entry talks about state interest in the place, though (in a way I am not entirely sure I should trust):

The temple and statue are owned and operated by two front groups of the Shanghai State Security Bureau, a branch of the Ministry of State Security, as a way to exert ideological control and influence over the southeast Asian Buddhist community and counter the influence of Indian Buddhism.[4]: 171–185 The temple promotes Chinese government-approved religious practices known as "South China Sea Buddhism."[4]: 171–185 The temple's religious messaging has been managed by the Chinese Communist Party's United Front Work Department since 2018.[4]: 171–185

 The official website does gush a bit:

The relation between Nanshan (the South Mountain) and Guan Yin bodhisattva (Buddha) is predestined and historically extended so long. It is said that among the Guanyin Bodhisattva’s 12 wishes, the second was to live in the South China Sea. Hence,Guanyin is also called South Sea Guanshiyin. Nanshan, located at the coast of South China Sea, resembles a huge legendary turtle, for which it was called Aoshan and deemed as Guanyin’s riding animal in ancient times. In Qiongzhou, the legend has passed for long that Guanyin has ever made the tour to the South China Sea in her effort to save the miserable masses. Everyone in this area praises her for her benevolence. According to the legend, the two islands Dongmao and Ximao were formed of some clay carelessly dropped by Guanyin when she flew with it on her tour of salvation. 

 Anyway, I wouldn't mind visiting Hainan.  Now I have more reason to... 

Thursday, April 30, 2026

Wednesday, April 29, 2026

And another thing (or three)....

*  I mentioned recently how the X "For You" feed had gone all weird, with "inspirational" personal stories and such like.  I forgot to mention, but it's hard to avoid - there has also been a big spike in sex related posts in that feed.  Not porn, but people posting on sex topics.  (A bit like the way the Slate website has for years run dubious sex life true experience stories.  "My husband's secret kink is too much of a surprise" type stuff.)    I wish it would stop...

*  Avocados in Australia seem to have become permanently cheap.   This is a good thing, except for the farmers who over-planted, I guess....

*  Back to X - could Naomi Wolf possibly get any nuttier?   And see some of the comments in the thread, too:  

An example of Trump adjacent grifting

There's a story at the Washington Post which I will gift link:  A Trump-branded nuclear power project thrilled investors. Then came the crash. 

Some highlights:

When the start-up Fermi America announced plans to build the Donald J. Trump Advanced Energy and Intelligence Campus near Amarillo, Texas, last year, investors clamored for a chance to cash in on the artificial intelligence boom sweeping through the U.S. economy. 

Wow, talk about your red flags, right in that name!   Putting the words "Donald Trump" in that close a proximity to "Advanced...Intelligence"?   

Next red flag:

Fermi’s co-founders include Rick Perry, energy secretary during President Donald Trump’s first term, and his son Griffin. The company said construction of the world’s largest data center would begin at the site by the end of 2025. It aimed to break ground on four nuclear plants bearing Trump’s name, it said in a news release last month, “on July 4th — with the President.” Investors valued the company at more than $13 billion after its initial public offering in October, making the stakes owned by Griffin Perry and the family of chief executive Toby Neugebauer each worth billions

Remember I've noted before how Rick Perry is a convert to the medical use of psychedelics (or at least, ibogaine)?    That whole movement has suspiciously grifty aspects to it.

Third red flag:  that Toby Neugebauer guy:

Neugebauer, who owns a 17,000-square-foot Dallas mansion modeled after the White House, and is the son of former congressman Randy Neugebauer (R-Texas), still sits on Fermi’s board...

And get this:

Neugebauer previously founded an “anti-woke” bank called GloriFi that offered credit cards made out of bullet casings. It went bankrupt months after opening in 2022. In bankruptcy litigation, Neugebauer was accused by GloriFi’s court-appointed trustee of spinning a “fairytale” to deflect blame for the bank’s demise and “complete disregard and disdain for corporate governance.” Neugebauer has denied the allegations in court and said misconduct by others caused the bank to fail. The case is ongoing. 

Anyhoo:

...stock traders have been fleeing Fermi since and dumped more shares last week after Neugebauer was forced out by the board. Construction appears to have stalled in Texas, and Fermi, which said in federal filings that it has never generated any revenue, has been unable to secure a tech company tenant for its planned data center. Griffin Perry and other company executives have cashed out tens of millions of dollars in Fermi stock in recent weeks, according to regulatory disclosures. Shareholder lawsuits accuse the company of overhyping its prospects for success, and its stock price ended Monday 81 percent below its public debut. 

Anyone who invested in that and lost money - it's hard to have sympathy...

 

Tuesday, April 28, 2026

The case for what might be called "self/no self agnosticism" in Buddhism

So, I was listening to another podcast episode from Tricycle on the weekend, and it featured an American who has long been a Buddhist monk in what is called the Thai Forest Tradition (the first time I've heard of it).   His name is Ṭhānissaro Bhikkhu, and I see from his Wiki page that he's 76, way older than he sounded on the podcast, I reckon.

Anyway, the podcast episode had the title "Did the Buddha Really Teach That There is No Self?" - which is pretty intriguing, especially coming from a Buddhist monk.

It turns out, now that I have had time to Google him, that this monk has been arguing his take on this for a long time - an article here, from 1996, contains the key argument he was making in the podcast.  It's not very long, but given the inevitability of link rot, let's copy most of it here anyway:

...if you look at the Pali canon — the earliest extant record of the Buddha's teachings — you won't find them addressed at all. In fact, the one place where the Buddha was asked point-blank whether or not there was a self, he refused to answer. When later asked why, he said that to hold either that there is a self or that there is no self is to fall into extreme forms of wrong view that make the path of Buddhist practice impossible. Thus the question should be put aside.  

The Buddha divided all questions into four classes: those that deserve a categorical (straight yes or no) answer; those that deserve an analytical answer, defining and qualifying the terms of the question; those that deserve a counter-question, putting the ball back in the questioner's court; and those that deserve to be put aside. The last class of question consists of those that don't lead to the end of suffering and stress. The first duty of a teacher, when asked a question, is to figure out which class the question belongs to, and then to respond in the appropriate way. You don't, for example, say yes or no to a question that should be put aside. If you are the person asking the question and you get an answer, you should then determine how far the answer should be interpreted. The Buddha said that there are two types of people who misrepresent him: those who draw inferences from statements that shouldn't have inferences drawn from them, and those who don't draw inferences from those that should.

These are the basic ground rules for interpreting the Buddha's teachings, but if we look at the way most writers treat the anatta doctrine, we find these ground rules ignored. Some writers try to qualify the no-self interpretation by saying that the Buddha denied the existence of an eternal self or a separate self, but this is to give an analytical answer to a question that the Buddha showed should be put aside. Others try to draw inferences from the few statements in the discourse that seem to imply that there is no self, but it seems safe to assume that if one forces those statements to give an answer to a question that should be put aside, one is drawing inferences where they shouldn't be drawn.

So, instead of answering "no" to the question of whether or not there is a self — interconnected or separate, eternal or not — the Buddha felt that the question was misguided to begin with. Why? No matter how you define the line between "self" and "other," the notion of self involves an element of self-identification and clinging, and thus suffering and stress. This holds as much for an interconnected self, which recognizes no "other," as it does for a separate self. If one identifies with all of nature, one is pained by every felled tree. It also holds for an entirely "other" universe, in which the sense of alienation and futility would become so debilitating as to make the quest for happiness — one's own or that of others — impossible. For these reasons, the Buddha advised paying no attention to such questions as "Do I exist?" or "Don't I exist?" for however you answer them, they lead to suffering and stress.

To avoid the suffering implicit in questions of "self" and "other," he offered an alternative way of dividing up experience: the four Noble Truths of stress, its cause, its cessation, and the path to its cessation. Rather than viewing these truths as pertaining to self or other, he said, one should recognize them simply for what they are, in and of themselves, as they are directly experienced, and then perform the duty appropriate to each.

I take it from the podcast that there are certainly other Buddhist figures who disagree with this take, which (as I said in the title to the post) I think could be called an argument that Buddha's true position was akin to agnosticism on the self question, not "no self".

In the podcast he did make some pragmatic comments that went a bit further than that short article (there was talk of the idea of responsibility that is somewhat tied to "self" being a "skilful" thing to - how to put this? - maybe not "believe", but at least not dismiss.)  

  

Sunday, April 26, 2026

Of course a chronic narcissist would interpret it this way...

From the Washington Post:

“I’ve studied assassinations, and I must tell you, the most impactful people, the people that do the most ... they’re the ones that they go after,” Trump told reporters at the White House soon after a shooting suspect was apprehended. “And I hate to say I’m honored by that, but I’ve done a lot.”
Trump mentioned Abraham Lincoln, but not Ronald Reagan, who was injured in a shooting outside the same hotel in 1981.
His MAGA cult members should understand, then: their leader kinda gets a kick out of the attention a shooting attempt brings, so why complain about the number of Americans just trying to keep him happy that way?

Friday, April 24, 2026

China AI gets positive review

There's an article in the New York Times that has a positive take on China advancing into open source AI.  

The only thing I find a bit confusing about this is that I associate "open source" with "free", but you still have to pay to use open-source AI if you're more than a light user because, obviously, it costs money to run.   So how does the "free to modify" aspect of open-source AI models even work?  What is it that they modify, and where does the modification reside, so to speak?  

I'll have to ask an AI to explain!  

Anyway, some extracts from the NYT:

On Friday, DeepSeek released a preview of V4, its long-awaited follow-up model, which it intends to open source. The new model excels at writing computer code, an increasingly important skill for leading A.I. systems. It significantly outperformed every other open-source system at generating code, according to tests from Vals AI, a company that tracks the performance of A.I. technologies.

DeepSeek released its new model just days after Moonshot AI, another Chinese start-up, introduced its latest open-source model, Kimi 2.6. While these systems trail the coding capabilities of the leading U.S. models from Anthropic and OpenAI, the gap is narrowing.....

The competition to build the best-performing A.I. systems has transformed into a geopolitical power struggle. While Silicon Valley leaders at Anthropic and OpenAI warn that their technology would be dangerous in the hands of autocratic countries, China has invested billions to become an A.I. superpower, viewing the technology as a critical engine of economic growth.

DeepSeek’s open-source models are central to this strategy. While many Western companies guard their most valuable models, China has embraced open source and almost all of its top-performing systems are widely available.....

From Lagos to Kuala Lumpur, developers on tight budgets are turning to Chinese open-source models because they are cheaper to run and therefore easier to experiment with. Last May, Malaysia’s deputy minister of communications said the country’s sovereign A.I. infrastructure would be built on DeepSeek’s technology.

Chinese open-source models accounted for roughly one-third of global A.I. usage last year, according to a study by OpenRouter, an A.I. model marketplace. DeepSeek was the most widely used, followed by models from Alibaba, the Chinese internet company.

The article does mention Kimi, which is the Chinese AI I have spent most time trying.  I think it is not bad for certain things, and I even paid to use it for a month or two, but I don't think I will continue.

For what it is worth, this is what I am currently finding:

*   Best AI service for research that will always provide the links for its responses and conclusions: still Perplexity.   (I am somewhat tempted to subscribe to it, but I saw a video claiming its policies were voraciously unfair, along the lines of "anything you upload can be used by us", I think.  So, I don't know.)

*   Best service for drafting a document or clause for an agreement, or for summarising a lengthy document: Claude.   Essentially, it is the best one for English expression, I find.  It is also perhaps the best for creative writing suggestions.  Its research can be quite good too.

*   Best AI for technical questions (like how to address a computer or software issue): Kimi.

*   Worst AI for reliable research answers: the Google/Gemini AI that gives answers to a Google search.   Its answers can be remarkably and confidently wrong.  It really puts me off relying on Gemini as a standalone AI.

I haven't used Chat GPT much lately.   It was the best for silly stuff, like if you wanted an AI to read your tarot cards, or to converse in the role of a historical figure.  But for real use, I'm not very confident in it. 

 

 

Odd things at X

Speaking of social media, as I was in the last post: I mainly look at X to see what the mad Right (and the appalling Elon Musk) is telling itself, and I do visit daily.   The number of sensible people still using it seems to be diminishing even further.  (I should up my presence on Bluesky, which I still consider unfairly maligned.)    

But I have noticed in the last few weeks how the "for you" feed in X seems to have had a big increase in what I might call "story posts" - people (apparently) talking about what their husband/wife/child did the other day, and stuff like life lessons that came out of it.  A lot of them are non political - and most have no interest to me at all.  And many feel of dubious authenticity, too.

It seems that the algorithm is trying to "humanise" the place, or something?  But it just feels bland and of no interest.  

I wonder if anyone else has noticed?  Let me check.  Google's AI tells me it is not my imagination:

Yes, many users have observed a significant increase in personal, non-political "story posts" about family and life in the X (formerly Twitter) "For You" feed, a shift aligned with major algorithmic updates in early 2026.
This change is not coincidental; it is part of a deliberate move to make X more engaging and less politically charged, resembling a "text-based TikTok". Here is why you are seeing more of these posts:
  • New "Grok-Powered" Algorithm: In January 2026, X replaced its legacy system with an AI-driven, Transformer model (Phoenix update) that reads posts and matches them to user interests, actively prioritizing content that generates conversation and "real" interaction.
  • Prioritizing Personal Connection: The algorithm now favors authentic, personal stories (often referred to as "storytelling" or personality content) over just news or politics, aiming for higher dwell time and "relatable" content.
  • Reduced Political Focus: The 2026 algorithm update, influenced by a new product strategy, seeks to foster a more "welcoming" space, often filtering out or reducing the velocity of polarizing content in favor of higher-quality, personal engagements.
  • "Story" Post Structure: The platform is actively pushing conversational, story-like structures that encourage replies and shares, which the new AI deems "higher quality".

 Can't we just have old Twitter back??