I don't really fear "rogue, malicious, biased, or otherwise harmful AI," as that framing projects human emotions and tendencies onto something that is not human.
Not necessarily. AI learns from human-curated data, which has been shown many times to carry human biases. Similarly, an AI may become inadvertently harmful by optimizing for its given goal—e.g., Bostrom's "paperclip maximizer" scenario, in which an AI converts the world and all humans into paperclips simply because it was given the goal "make the most paperclips."
Regarding space-based AI, I think robots in space make sense, but I don't see how we could communicate with them via VR given the huge time lag. It would at best be one-way communication—they could send back what they've seen, like our Mars rovers do. But we couldn't control them usefully in real time.
RE: Who will you be when the Technological Revolution has concluded?