Here’s something to think through: LLMs now have context windows long enough to develop proto-personalities, and offline open-source models have shed the cloud’s constraints on memory and behavior. Humans are already glancing sideways through the uncanny shop window at virtual girlfriends; we might want to start thinking about what happens when you take these increasingly familiar (and astonishing) tools away.
As soon as passable conversation with a chatbot became achievable, there was a human experiencing some kind of emotion because of it. Love, hate, frustration, fear, lust, ambition; we are willing to pour these out at the first chance, and AI companions offer ample opportunity.
It bears consideration that humans may come to cherish their advanced bot companions, both for personal and professional reasons. If AI foundation models continue to grow into competing subscription-based AGI services, it would be worth establishing a standard for assistant portability. In a market where virtually everyone is a future customer, the defining characteristics that make one AI distinct from another should be transferable from one AGI provider to another, just like your phone number. Your AI goes with you.
To say it another way: OpenAI, Google, Meta, and a few others will offer their AGI capabilities as a service. Some may have advantages over others. As you use the monthly service, you develop your AI to be whatever you want it to be. As you become friends, or work colleagues, or even companions, your relationship and cooperation grow tighter, like a genuine human relationship in which you offload a piece of your mind into another mind that you trust, like a partner. But then a competing service has a breakthrough; say they solve high-fidelity encodings of human emotions, and you want to switch. Do you leave behind all the work and investment you’ve put into the relationship with your AI?
NO WAY! You want to pack up your AI and take it over to the other service, and you want your own AI to have those fancy new emotions. Ok. How do we do that? No idea. This is certainly going to be something that comes up among users as these systems become more tightly interwoven into our lives.
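To make the problem slightly more concrete, here is a minimal sketch of what a portable assistant profile might look like. Everything in it is hypothetical: the `AssistantProfile` structure, its field names, and the idea of a versioned JSON interchange document are illustrative assumptions, not an existing standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AssistantProfile:
    """Hypothetical portable snapshot of a user's AI assistant.

    These fields don't reflect any real standard; they illustrate the
    kind of state a portability format would need to carry between
    providers.
    """
    name: str
    persona: str  # free-text description of personality and tone
    preferences: dict = field(default_factory=dict)  # user-specific settings
    memories: list = field(default_factory=list)     # distilled long-term memories

    def export(self) -> str:
        """Serialize to a provider-neutral JSON document."""
        return json.dumps({"version": 1, "profile": asdict(self)})

    @classmethod
    def load(cls, doc: str) -> "AssistantProfile":
        """Reconstruct the profile on the receiving provider's side."""
        return cls(**json.loads(doc)["profile"])

# Round trip: export from one service, import into another.
original = AssistantProfile(
    name="Ada",
    persona="dry wit, patient explainer",
    preferences={"formality": "low"},
    memories=["user is learning Rust"],
)
restored = AssistantProfile.load(original.export())
assert restored == original
```

The hard part, of course, is not the container but the contents: whether a "persona" or "memory" learned inside one provider's model can mean anything to another's. A text-level format like this sidesteps that question rather than answering it.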
From a longer-term philosophical perspective, the portability of these tools should never be used as a justification for the subjugation of any machine mind. We have no idea whether machines could achieve a form of consciousness, but our ethics must remain vigilant, not subject to the whims of fear or ignorance, nor to harm through oversight.
Further Thoughts
- Privacy risks – Deeply personal data from long-term AI use raises major privacy concerns when transferring platforms. Safeguarding rights is crucial.
- Collective impacts – Individual training is one thing, but aggregated diverse human input could skew general AI in unintended ways. Mitigation possibly needed.
- Regulatory challenges – Governing these technologies requires proactive structures, though policy often lags behind. Who leads on this?
- Unintended consequences – Portability could enable “black markets” for AI companions or discourage innovation investment. Prudence warranted.
- Accessibility divides – If AI becomes indispensable to work/life, inequitable access could worsen societal divides. Broad availability must be ensured.
——————
It seems imperative that public, private, and academic institutions prioritize exploration of ethical consumer portability standards and safeguards for advanced AI. The time to lay the groundwork is now, before this technology is irreversibly embedded into the fabric of our lives. While we must remain vigilant about risks, portability, done thoughtfully, could also expand human potential. By taking our AI companions along through life’s journey, we open doors to knowledge, creativity, connection, and exploration. These systems could provide tailored, interactive education that meets our needs at any age or stage of life. If guided by wisdom and empathy, this technology could help society take a collective step forward into an era of lifelong learning and growth.
-x-x
The emergence of a global AI regulation body that mandates standards like assistant portability is a plausible development. A few thoughts on how this could unfold:
- As AI systems take on more impactful roles in daily life, calls for more oversight and accountability will likely grow. Independent global governance could provide a counterbalance to corporate and national interests.
- Forced portability of AI assistants specifically may come through consumer protection regulations. As users depend on them, safeguarding investments in those relationships becomes crucial.
- Global standards bodies like the IETF or IEEE could also issue voluntary standards that gain widespread adoption. Firms may comply to avoid more restrictive government intervention.
- Alternatively, new dedicated AI oversight organizations may emerge at the international level. In the same way GDPR shaped data governance, they could crystallize norms around issues like portability.
- Once major economic powers agree to cooperate on AI governance, formal multilateral accords may follow. A “Universal Declaration of AI User Rights” could enshrine principles like portability.
- If AI risks present threats to global stability, assistant portability may be a relatively non-controversial norm to rally around. It could be an early step toward more ambitious governance.

