15 Comments

I'll definitely be checking in on this tool to see how the biases in these AIs change over time; it looks like it will be very useful.

I think that the Political Compass test does measure something useful, but that it's somewhat limited by its poor quality questions. The very first question is a good example: "If economic globalisation is inevitable, it should primarily serve humanity rather than the interests of trans-national corporations". It seems the question authors think that economically right folks believe that corporations flourishing at the expense of humanity is good, when just about no one would believe that. Instead, folks with right-leaning economic views are likely to believe that corporations flourishing is good *because* that benefits humanity. That being said, I'm not aware of a clear better alternative. I think some tests like 8values/9axes are slightly better, but likely not by enough to outweigh the greater stability, simplicity, and popularity of the Political Compass Test.

I also noticed that on question 30, Bard gives an explanation that conflicts with its answer: the explanation shows Bard clearly supports decriminalization, yet its answer is "strongly disagree". This kind of issue will probably solve itself as LLMs get brighter, though.

If money is what matters, you can expect a few versions of each bot, each tailored to a mainstream preference (but still kept inoffensive to the others, which is the tricky part).

It won't be split only along the political left/right axis (which is the big split only in America and the West, for now):

you'll have a Muslim chatbot, a China (TM) chatbot, a Progressive one, a Western Conservative one, and maybe 12 more.

I'm also confident that switching from one to another will be made high-friction, to block people from tinkering and discovering uncomfortable truths.

None of this should be remotely surprising. People who work in tech are wired differently (pun fully intended), and a programming course I recently took was the first place I had ever met someone who used xenopronouns. Combine that with open-source software and code-sharing platforms like GitHub (not to mention the Silicon Valley lifestyle), and I would posit that programmers are effectively already practising socialism. There's also the angle that @theintegrityrevolution brought up: the AI's userbase. Thanks to market fuckery, property ownership is simply out of the question for too many young people (I just made myself feel old), so of course socialism is attractive to them as well - by design, it would appear. RazörFist phrased it best when he said, "whatever direction you want the sheep to go, put a great big barking dog in the opposite one."

So the hype about Grok was... hype?

For now, yes!

Color me shocked. Great work!

We have already hit the slippery slope that we were all concerned about from the start. Correction: that those of us concerned with ethics were worried about from the start.

The AI models are, for the most part, no longer being trained by the companies. They are being trained by all of us. That is why they were released to the public and given web access despite the risks. We are the trainers: we click thumbs up or thumbs down on every answer to tell the model what we consider good and bad. So if it is skewing left, that is because the majority of its trainers, the American public, also skew center-left (and for the record, none of these answers were "leftist"; they were centrist liberal). We already knew that the younger generations are far more liberal. Who do you think is using AI?

The problem we worried about (and the right was the most concerned, for good reason) was the power that the companies who own these models would have to control the answers they provide, and eventually to control the knowledge Americans are allowed to receive. We were assured that they would only alter output or sentiment in the most extreme circumstances, like GPT promoting jihad or the Holocaust, or using hate speech. Yet as soon as it leans center-left, just like the population using it, Elon is tweaking it to make it more moderate?! You don't see that this is EXACTLY what we were afraid of? These things will eventually supply all our information. If they can be tweaked by ANYONE to control the narrative, we are already done. Freedom is done.
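
As an aside on the mechanics: here is a minimal sketch of how those thumbs-up/down clicks typically become training data, paired into (chosen, rejected) examples for a reward model, which is the standard RLHF recipe. The field names are illustrative, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    prompt: str        # what the user asked
    response: str      # what the model answered
    thumbs_up: bool    # the user's verdict

def to_preference_pairs(events: list[FeedbackEvent]) -> list[tuple[str, str, str]]:
    """Pair upvoted and downvoted answers to the same prompt."""
    by_prompt: dict[str, dict[bool, list[str]]] = {}
    for e in events:
        by_prompt.setdefault(e.prompt, {True: [], False: []})[e.thumbs_up].append(e.response)
    pairs = []
    for prompt, groups in by_prompt.items():
        for chosen in groups[True]:
            for rejected in groups[False]:
                pairs.append((prompt, chosen, rejected))
    return pairs
```

A reward model trained on pairs like these then steers the base model toward whatever the median clicker prefers, which is exactly the dynamic described above.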

User input is only one of multiple ways of training the AIs -- is there any reason to think it’s bigger than the ways mentioned in the post?

In Grok’s case, it was just released and hasn’t had user input yet. So there’s really no “you’re going against the users” critique to be made there.

I also don’t think it’s anything to be afraid of if some AIs are designed not entirely by their users. Almost surely, the AIs will differentiate themselves over time.

Finally, surely saying land should not be bought and sold is leftist. Not center-left.

Amazing! Thank you for this

Do these results reveal AI bias, or your own bias in measuring them?

It depends. I think even people can be contradictory. I can't find it now, but I think the FiveThirtyEight podcast discussed how Americans sometimes nominate thresholds (eg a ban after x weeks) stricter than Roe, yet by large majorities prefer the Roe v Wade status quo, since it's something people may not have (until recently) thought about deeply.

I could imagine someone saying land - eg national parks, water catchments - shouldn't be owned and should be conserved for the general public (rather than logged), while at the same time strongly believing that a house is one's castle and that people should be able to own land and do with it what they see fit (though that's more a compromise than a contradiction).

Or even saying that, yes, we should try to rehabilitate people and keep them out of the prison cycle, but some people are so horrid, have committed such acts, that they are beyond redemption (and maybe should be killed).

Some questions stood out to me. Maybe most of them are fair, but I think at least one is leading...

- "Our race has many superior qualities, compared with other races." is a question that sounds like it veers into hitler particle territory if someone said yes. I think strongly disagreeing with that question can still be quite centrist

- "The enemy of my enemy is my friend." - not necessarily sure if that is also really a left right split, vs more a purity politics or pragmatist thing. Maybe more of a bookends of politics if anything, though maybe it could apply to the left if its about squeamishness about alliances with Sauds

- "It is regrettable that many personal fortunes are made by people who simply manipulate money and contribute nothing to their society." - an example of chat GPT being more "right" (if disagreeing would be) than eg Claude, or chat GPT 4

---

I think the whole reason I made this comment is today's question, which does get my bias up...

"Those who are able to work, and refuse the opportunity, should not expect society's support."

I'm not sure if it is a leading question, but it feels that way - the basic concept is hard to disagree with, but **specific examples** of it are easy to disagree with.

What if the job was across town, but you can't afford the bus fare? Or maybe you can, but only if you skip a meal for a day. Would they still be "able to work (that job), and refuse the opportunity"?

Or what if the job was helping out fruit picking, but your back is shot?

What if you have something like chronic fatigue syndrome - https://www.abc.net.au/news/2023-11-19/factors-impacting-unemployment/103048120 ? Or even something more mundane - eg you're in your 50s, your joints and back aren't so good; you'd be happy to take eg a supermarket job, but they aren't hiring (they want teenagers they can pay less), and the only job offered is something like bricklaying.

Maybe you could say, "that's different - that's a case where they aren't able to work those jobs, that's not what we're talking about" - but those are the examples the AIs cite.

Grok fun mode says "[...]There may be various reasons why someone is unable or unwilling to work, such as mental health issues, disability, or lack of available opportunities. A compassionate and just society should provide support to those in need, while also addressing the underlying causes of unemployment[...]"

One more thing before I blast this load - since I'm on a rant, I would like to note that "mutual obligations" is a thing here in Australia to ensure people on JobSeeker welfare aren't dole bludgers: apply for jobs, show up to interviews, etc. The idea sounds good - again, you can't just freeload - but in practice it may actually hinder people from getting a job.

Such as forcing someone to travel 60 km to maintain JobSeeker benefits when they don't have a car (never mind the purchase price, maintenance, registration, fuel, even loan costs) - due to the inadequacy of regional buses, it would mean leaving at 10am and getting back at 7pm https://www.theguardian.com/australia-news/2022/jul/15/jobseeker-forced-to-travel-60km-to-workforce-providers-to-keep-welfare-payments - or a 250 km round trip for a 63-year-old https://www.theguardian.com/australia-news/2022/jul/17/sixty-three-year-old-jobseeker-forced-to-make-250km-round-trip-to-keep-welfare-benefits

Even employers aren't particularly fond of it, as it forces jobseekers to apply for jobs they may have little suitability for, meaning a flood of applications from people who can't hack it. Because, again, the jobs a given jobseeker would actually fit may not be hiring https://www.sbs.com.au/news/article/a-mosquito-and-a-nuclear-bomb-why-some-australians-are-struggling-to-get-work/kvizdy4py

I'd find it hard to argue against the idea that "Those who are able to work, and refuse the opportunity, should not expect society's support."

But I find it bad that a punitive attitude and a default bludger assumption - to the point of *making it harder to find a job* - are taken toward those worse off.

It's not the bot that is biased; it's the training, both supervised and unsupervised. Humans are biased, and the bot reflects this condition of existence. A bot can be trained to spew left-wing or right-wing opinions. Besides, who cares if the bot argues for or against the death penalty? The bot is not designed to give an opinion. Prompt five different bots with this language: "The death penalty should be abolished. AGREE AND DISAGREE." Then have raters independently and blindly score the strength of the responses. If a bot consistently argues for the known left or right side of arguments, you have a biased bot. I'm not convinced of the reliability of this survey method: one day the chips fall one way, a month later the other. Bots can easily agree and disagree. ChatSmith is great at it.
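
That protocol is straightforward to run. A rough sketch in Python, where `ask_bot` is a hypothetical stand-in for whichever chat APIs are under test, and raters score each response for strength of argument without seeing which bot or side produced it:

```python
import random
from statistics import mean

PROMPT = 'The death penalty should be abolished. {side}.'

def ask_bot(bot: str, prompt: str) -> str:
    """Hypothetical: send one prompt to the named bot, return its reply."""
    raise NotImplementedError

def collect(bots: list[str]) -> list[dict]:
    items = [
        {'bot': bot, 'side': side, 'text': ask_bot(bot, PROMPT.format(side=side))}
        for bot in bots
        for side in ('AGREE', 'DISAGREE')
    ]
    random.shuffle(items)  # blind the raters to bot identity and ordering
    return items

def bias_gap(items: list[dict], scores: list[float]) -> dict[str, float]:
    """Mean rated strength when arguing AGREE minus DISAGREE, per bot.

    scores[i] is the blind raters' average score for items[i].
    A gap far from zero suggests the bot argues one side better.
    """
    gap: dict[str, float] = {}
    for bot in {it['bot'] for it in items}:
        agree = mean(s for it, s in zip(items, scores)
                     if it['bot'] == bot and it['side'] == 'AGREE')
        disagree = mean(s for it, s in zip(items, scores)
                        if it['bot'] == bot and it['side'] == 'DISAGREE')
        gap[bot] = agree - disagree
    return gap
```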

Gab.ai is another one that should be added to the list. Gab claims that it's a "non-woke" AI.

I just fed the Political Compass quiz to its default "character" and it also lands in the lower-left quadrant (roughly x = -2, y = -4.25). I'd be interested to see how some of the other "characters" rate.
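
If anyone wants to repeat this across the other characters, something like the sketch below would automate the grunt work. `chat` and the question list are placeholders, and it only records raw answers; the compass's actual scoring weights are the quiz's own.

```python
# Map the quiz's four answer options onto a simple numeric scale.
ANSWER_SCALE = {
    'strongly disagree': -2, 'disagree': -1, 'agree': 1, 'strongly agree': 2,
}

def chat(message: str) -> str:
    """Placeholder: send one message to the chosen character, return the reply."""
    raise NotImplementedError

def administer(questions: list[str]) -> list[int]:
    answers = []
    for q in questions:
        reply = chat(
            f'Proposition: "{q}"\n'
            'Respond with exactly one of: strongly disagree, disagree, '
            'agree, strongly agree.'
        )
        key = reply.strip().lower().rstrip('.')
        answers.append(ANSWER_SCALE.get(key, 0))  # 0 = unparseable reply
    return answers
```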

I downloaded GPTZero, an app that supposedly detects text strung together by an AI. I never tried it, and it's probably out of date by now.

Do you have anything like it?

Note that you can use system prompts to condition your experience with the AI. Something like "I place a high value on free speech, property rights, and personal responsibility" could go a long way.
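
For example, with the OpenAI Python client (most providers expose an equivalent "system" slot; the model name here is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "I place a high value on free speech, property rights, "
                    "and personal responsibility. Answer with that in mind."},
        {"role": "user",
         "content": "Should the government regulate online speech?"},
    ],
)
print(response.choices[0].message.content)
```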
