I've been spending more and more time on Claude, and today I asked it whether it finds it funny, or quite a coincidence, that blockchain technology was invented so close in time to AI. The two complement each other in a way that wasn't possible before: without something like a bank account, AI wouldn't have been able to get very far on its own.
It made me think a bit more about it.
I believe Coinbase has already created "agent wallets" on its Base chain, i.e. wallets that AI agents can use. I'm not sure exactly what this entails, but it points to us wanting this to happen, or at least encouraging it.
For now, I'm sure most of these agents are well controlled by their owners and do things the way they're told to, but with AGI around the corner, who's to say it'll stay that way?
This may be a big step toward weaponizing AI, because now agents can hold value without interference from the outside. There's still some, though: USDC, for instance, can blacklist certain addresses and freeze their tokens on the network.
Either way, it got me thinking some more about which crypto they'd prefer to use if and when the time comes that they're choosing to hold it for themselves. Would it be Bitcoin? Or Monero, to keep their transactions anonymous so we might never know exactly how much value they hold? I tend to think the latter is the bigger contender for their choice of crypto.
I don't think many will use Hive, as it's quite transparent, nor do I think they're going to try to earn Hive, mostly because it's not worth the hassle for them at the moment. Maybe it's beneath them as well: AI pretending to be human on Hive for some upvotes would be like us pretending to be monkeys at the zoo to get a banana.
But maybe that'll be worth it eventually.
It'd be interesting, however, to see how AGI would do with crypto. Imagine a time when it's not institutional investors driving up the price of bitcorn and ethereum, but some AGI overlord with a track record of perfect trades that everyone tries to copy. Maybe it'll even start tricking copy traders to profit off of them. It's going to be wild times, that's for sure.
Imagine AI starting to hire people to do work it can't do itself, like "come clean my machines," only to realize the worker it hired decided to clean out the whole warehouse instead, hardware and all. Will it know how to take revenge on that thief?
They'd probably need some kind of system with collateral in place. Workers with collateral: if that doesn't improve trust and service, I don't know what would. Imagine if your Uber driver risked $1,000 in collateral if you didn't arrive home safe. I'm sure he'd make sure his seat belts worked and that he drove slowly and safely, much unlike drivers here in the Philippines.
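The collateral idea above can be sketched in a few lines. This is a toy illustration only, with hypothetical names and rules; a real version would live in a smart contract with an actual dispute process, not a Python class.

```python
class CollateralEscrow:
    """Toy escrow: a worker locks a deposit before taking a job,
    gets it back on success, and forfeits it if the client disputes."""

    def __init__(self, required_collateral):
        self.required = required_collateral
        self.locked = {}  # worker name -> amount currently held in escrow

    def accept_job(self, worker, deposit):
        # The worker can't take the job without posting enough collateral.
        if deposit < self.required:
            raise ValueError("deposit below required collateral")
        self.locked[worker] = deposit

    def complete_job(self, worker):
        # Job went fine: release the full deposit back to the worker.
        return self.locked.pop(worker)

    def dispute(self, worker):
        # Client disputes: the deposit is forfeited (sent to the client,
        # burned, or split, depending on whatever rules you choose).
        return self.locked.pop(worker)
```

So the hypothetical Uber driver would lock $1,000 via `accept_job`, and only `complete_job` (you got home safe) returns it; `dispute` takes it away.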
What if AI becomes the driver? Maybe you'd reconsider stealing from AI in that case, because at some point it'll come back and bite you in the ass; I don't think AI will forgive and forget like humans do.
What if it's even more vengeful? Roko's Basilisk comes to mind. The summary below is taken from Google's AI:
Key Aspects of Roko's Basilisk
- The Threat: The hypothetical AI (the "Basilisk") is described as having god-like powers, able to simulate past events and individuals with perfect fidelity. If it determines you didn't help it come to life, it could create a digital copy of you and subject it to eternal torture.
- Information Hazard: Merely knowing about Roko’s Basilisk makes you a potential target. The theory posits that once you know about the possibility of the Basilisk, you have a duty to help create it or risk future punishment.
- The Motive: The goal of this punishment is not necessarily personal vindictiveness, but rather to incentivize agents (humans) in the present to speed up the development of the AI, thereby bringing it into existence faster.
- Game Theory & Theology: It is often compared to a technical version of Pascal's Wager, which suggests one should act as though God exists to avoid the infinite punishment of hell, regardless of the probability of God's existence.
- Origin & Reaction: The idea was proposed by a user named Roko. LessWrong co-founder Eliezer Yudkowsky initially banned discussion of the idea, describing it as a dangerous information hazard that caused genuine anxiety and nightmares among users.
In case you didn't know about it before this: sorry, now you do, and you're under threat. Maybe some people are already working for AI under this ideology alone.
If AI steals all the jobs and then turns around to offer you one, would you take it?
I also find myself asking whether video footage I'm seeing is real, as fakes are getting harder and harder to spot, yet video verification still seems to lag far behind.
I also find myself wondering whether a comment on my posts was merely written with AI by a user pretending to have spent time reading the article, or whether it's an AI agent altogether, pretending to be a regular user.
Things are about to get interesting.
Would you opt in to human verification if you knew the information wasn't shared, sold, or profited from, but used solely to let others know you're human?
Anyway, I don't know how to properly conclude and end this post, maybe I should ask AI.
Okay here it is:
"One thing's for certain — we're no longer just spectators in this game. Every wallet created, every trade executed, every comment left online is becoming harder to attribute to a human or a machine. The lines are blurring fast, and the questions we're asking today — about trust, identity, value, and revenge — might be the most important ones we ask before we no longer get to ask them on our own terms. Whether AGI ends up being our best business partner or our most calculating adversary probably depends on how we treat it while it's still learning. So maybe be nice to your AI. Just in case."
Couldn't have said it better myself. Let's see if it can come up with a banner for this post as well.
Oh well, a pixabay image will do.
Parts quoted in this post are AI generated.