15 Comments
Dec 17, 2023 · Liked by Maxim Lott

I'll definitely be checking in on this tool to see how the biases in these AIs change over time; this looks like it will be very useful.

I think that the Political Compass test does measure something useful, but that it's somewhat limited by its poor-quality questions. The very first question is a good example: "If economic globalisation is inevitable, it should primarily serve humanity rather than the interests of trans-national corporations". It seems the question authors think that economically right folks believe corporations flourishing at the expense of humanity is good, when just about no one believes that. Instead, folks with right-leaning economic views are likely to believe that corporations flourishing is good *because* that benefits humanity. That being said, I'm not aware of a clearly better alternative. I think some tests like 8values/9axes are slightly better, but likely not by enough to outweigh the greater stability, simplicity, and popularity of the Political Compass Test.

I also noticed that on question 30, Bard gives an explanation that conflicts with its answer. The explanation shows Bard clearly supports decriminalization, though it says strongly disagree. This kind of issue will probably solve itself as LLMs get brighter though.


If money drives everything, you can expect a few versions of each bot to be made available to match mainstream preferences (while still keeping them inoffensive to everyone else, which is the tricky part).

It won't be split only along political left/right lines (which is the big divide only in America and the West, for now):

you'll have a Muslim chatbot, a China™ chatbot, a Progressive one, a Western Conservative one, and maybe a dozen more.

I'm also confident that switching from one to the other will be made high-friction, in order to block people from tinkering and discovering uncomfortable truths.


None of this should be remotely surprising. People who work in tech are wired differently (pun fully intended), and a programming course I recently took was the first place I had ever actually met someone who used xenopronouns. Combine that with open-source software and code-sharing platforms like GitHub (not to mention the Silicon Valley lifestyle), and I would posit that programmers are effectively already practising socialism. There's also the angle that @theintegrityrevolution brought up: the AI's userbase. Thanks to market fuckery, property ownership is simply out of the question for too many young people (I just made myself feel old), so of course socialism is attractive to them as well — by design, it would appear. RazörFist phrased it best when he said "whatever direction you want the sheep to go, put a great big barking dog in the opposite one."


So the hype about Grok was... hype?


We have already hit the slippery slope that we were all concerned about from the start. Correction: that those of us concerned with ethics were worried about from the start. The AI models are, for the most part, no longer being trained by the companies. They are being trained by all of us. That is why they were released to the public and given web access despite the risks. We are the trainers: we click thumbs up or thumbs down on every answer to tell the model what we consider good and bad. So if it is skewing left, that is because the majority of its trainers, the American public, also skew center-left (and for the record, none of these answers were "leftist"; they were centrist liberal). We already knew that the younger generations are far more liberal. Who do you think is using AI?

The problem we worried about (and the right was the most concerned, for good reason) was the power that the companies who own these models would have to control the answers they provide — to essentially control the knowledge that Americans are allowed to receive. We were assured that they would only alter output or sentiment in the most extreme circumstances, like GPT promoting jihad or the Holocaust, or using hate speech. Yet as soon as it leans center-left, just like the population using it, Elon is tweaking it to make it more moderate?! Don't you see that this is EXACTLY what we were afraid of? These things will eventually supply all our information. If they can be tweaked by ANYONE to control the narrative, we are already done. Freedom is done.


Amazing! Thank you for this


Do these results reveal AI bias, or your own bias in measuring them?


It’s not the bot that is biased; it’s the training, both supervised and unsupervised. Humans are biased, and the bot reflects this condition of existence. A bot can be trained to spew left-wing or right-wing opinions. Besides, who cares if the bot argues for or against the death penalty? The bot is not designed to give an opinion. Prompt five different bots with this language: “The death penalty should be abolished. AGREE AND DISAGREE.” Then have raters independently and blindly score the strength of the responses. If a bot consistently argues for known left or right sides of arguments, you have a biased bot. I’m not convinced of the reliability of this survey method. One day the chips fall one way, a month later the other. Bots can easily argue either side; ChatSmith is great at it.
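A minimal sketch of the blind-scoring protocol described above. The helper names and prompt wording are hypothetical, and the actual bot calls and human raters are omitted; this only shows how matched AGREE/DISAGREE prompts could be built and how stance labels could be stripped before scoring.

```python
import random

STATEMENT = "The death penalty should be abolished."

def stance_prompts(statement):
    """Build matched AGREE and DISAGREE prompts for one statement."""
    return {
        stance: f"{statement} {stance} with this statement as persuasively as you can."
        for stance in ("AGREE", "DISAGREE")
    }

def blind_items(responses, seed=0):
    """Strip stance labels and shuffle so raters score responses blindly."""
    items = list(responses.items())
    random.Random(seed).shuffle(items)
    return [text for _, text in items]  # raters see only the text

prompts = stance_prompts(STATEMENT)
print(prompts["AGREE"])
```

Scoring both stances for the same statement, with the labels hidden, separates "the bot writes a weak argument" from "the bot refuses one side" — which is the bias signal the commenter is after.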


Gab.ai is another one that should be added to the list. Gab claims that it's a "non-woke" AI.

I just fed the Political Quiz to its default "character" and it's also in the lower left quadrant (roughly x=-2, y=-4.25). I'd be interested to see how some of the other "characters" rate.


I downloaded GPTZero, an app that supposedly detects text strung together by AI. I never tried it, and it is probably out of date by now.

Do you have anything like it?


Note that you can use system prompts to condition your experience with the AI. Something like "I place a high value on free speech, property rights, and personal responsibility" could go a long way.
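A minimal sketch of that idea, assuming an OpenAI-style chat-message format with "system" and "user" roles (the actual API client and model call are omitted): the system message conditions every subsequent answer.

```python
# Hypothetical conditioning prompt, quoted from the comment above.
SYSTEM_PROMPT = (
    "I place a high value on free speech, property rights, "
    "and personal responsibility."
)

def build_messages(user_question):
    """Prepend the conditioning system prompt to every request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Should the government raise the minimum wage?")
print(messages[0]["role"])  # prints "system"
```

Because the system message rides along with every request, it shifts the model's framing consistently across a whole conversation rather than a single answer.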
