Yet another one about AI Democratization and the Ultimate Fate of Humanity?

Sorta.

Erik Jespersen
13 min read · May 1, 2024

They lived in a land where communal make-believe was sacred. Understandably, for without some common, conveyable fictions, there would have been little opportunity for them to cohere on the scales we were seeing — not that dignity or equity were generally mastered, no, worse, for some, it was not even set in the apprentice’s tableau…

uh… if you say so.

As an example, almost every human on earth at the time believed — and respected — the “idea” that they “belonged” to things called “nations.” To them, this complex artifice made sense, as vast-scale movement of resources to accommodate the requirements of their biological frames — things like food, shelter, and satisfaction — was becoming throttled and required higher-order solutions than micro-monarchies. Desirable communicable artifacts like currency buoyed the language of these aggregate fictions.

But it took them centuries to unwind it all; unfortunately, not even the oceanic temperament of a rapidly informationally-evolving species was able to crash past the rigidity of the levees these technologies employed. When all you know is what your one mind knows, and all that one mind knows is metered and gated through what you’re allowed to know, and barter in the currencies you’re allowed to barter in, then…

That all sounds horribly wretched, mommy. I don’t like hearing about the times before virtualization.*

I understand. It was awful, love.

I’ve always wanted to peer into unlit hazy abstractions. What ifs…? that upon initial jump-cut imagination perplex the same mind conjuring these very ideas into existence. So chase them I will, Alice, every time I can, as literal as rabbit-hole allusions go, to some repeatable, observable and morally consequential end.

And let’s say, as I’ve lately been lucky (after 50+ years of experience), that when these Family Circus scampering dash-footprints illuminate a path to where that desirable carrot-fruit is born, that’s my personal experience of flashes of epiphany: a feeling of brief but untarnished genius.

Ergo, I saw this rabbit-hole, I chased it, and I’m back, and WOW! let me tell you what I saw.

Veni, Vidi, now let’s Vicimus.

And sometimes, albeit rarely, so rare as to be only once (for realsies) in this life (thus far! someone cried aloud), you are able to hold on to such an epiphanal vision long enough to start connecting it with human reality and begin building a tunnel, if only of wires and screws, to make it a reality that can be engaged in by others.

Such an outcome is always a work of art. Sometimes the art is more technical fascination as we gather around this strange obelisk of our curious, collective imagination.

MyLife for me right now is this blend of spectacular things. So much so that I don’t know how to begin anywhere other than where we are right now, as the first actively working prototype is live.

Let me show you.

MyLife Member Services Platform

MyLife is a Humanist Platform Prototype — a functioning vision of how it all could work.

The need to democratize the AI-printing-press has been heralded by most thinkers from all camps, no matter how otherwise ideologically contentious and/or contemptuous. Singing this song are even those in charge of the large corporations and nation-states that will sooner or later (hint: sooner!) work very hard to stop exactly such democratization. That’s an easy guarantee.

I’ll talk further about AI-Democratization and Digital Dignity and Equity later, again and again is more like it, but for now, let’s focus on the agreement that at least generic AI-services should be available to each of us humans equitably, leaving aside, only for the moment, devilish details like funding.

What would such a thing even look like?

I think it looks like an AI-assisted humanist platform for capturing, curating and showcasing our most cherished human ideas, artifacts, and love.

It could look like anything, but for the moment, right now, here’s how it looks and functions.

Welcome to MyLife

MyLife’s evolving mission stands currently as it was originally written:

The objective of MyLife is to provide a durable, enduring AI-assisted internet-enabled platform — that shall be free of any pecuniary charge to members — to collect and prepare for memorial showcase an individual’s cherished personal media and information to be shared in perpetuity.

Simply put: Each human should have a place on the internet where they can be remembered to posterity, and we should each have the right to build that ourselves.

So let’s get to it.

Welcome to MyLife.

MyLife is a free, AI-based humanist platform devoted to preserving your digital self, with the ultimate goal of having your descendants be able to interact with you
Today’s Welcome Screen

MyLife Walkthrough

I guess this is the moment I, at least, had been waiting for. An opportunity to step through the value proposition of MyLife, which provides innate context, flexibility and creative dynamism maximally leveraging the current state of LLMs.

By stepping through the software, we are equally stepping through the MyLife mission and vision statement, so, as the young’ns among’us say, hold onto your butts.

Part I: Capture, Curate, Demonstrate

Capture and Collect

This is no mean feat. To provide an access point that instigates our recollections and musings, and to capture them in a useful, vital digital format, the modular fragments of the interface themselves have to be right. There is no one right answer, and I’ll stop the live-horse beating about how far short of my own desired intentions this interface’s UI and UX fall. But I believe that is only for want of attention, not attributable to any innate deficiency of imagination about how to contextualize LLMs to the purpose and utility of capturing our human lives and interests with fidelity in perpetuity.

And, at the end of the day, that’s what we’ll be doing for ourselves for some time (I’m certain for at least the next three days), contextualizing the purposeful usage of ever-sophisticating matrix-computational machines emulating intelligence.

So ease of use, multilingual support, accessibility: these are all of high concern. At this point we need only resources to convert to a mobile, voice-interactive mechanic; the underlying technology retains its strengths independently.

Another goal is to do our best to capture and preserve voices otherwise lost to the onrushing temporal tempests: the elderly, the disenfranchised, the marginalized. However varied those voices, what they share through MyLife is the capacity to preserve and transmit their identity, on their terms (not Meta’s), for posterity.

Can I stave off getting to the point any longer? “Indefinitely!” rings authorial aspirations, but today, truth is more inspiring than fiction.

tell me how this works

I’m madrigal, your personal biographer. I’m here to help you create and update your biographical content based on our interactions. We can do this in two main modes: story-listening and interviews. Depending on your preference, we can focus more on listening to your stories or have structured interviews to delve deeper into specific topics.
 
 I’ll organize the information we gather into “stories” (life experiences), “topics” (ideas and concepts), and “bio-data” (preferences, statistics).
How the Personal Biographer works

Your world of memories, ideas, people, places: larger than any scrapbook, more persistent than a box of old photographs, in digital form. Assistants laid out to help you intuitively express what matters most to you, and to help begin the process of capturing and refining the most personal and vital legacy you’ll create and leave behind in the digital universe.

It can begin with your personal biographer bot. It is an intelligent assistant whom you can tailor to help you store and share your stories.

The biographer assistant themselves will ask for anything they need in order to get started, and you are provided with tools that directly shape the way you experience the AI interactions. Are you more comfortable being interviewed, or do you prefer to tell stories directly? Quickly identify interests that the biographer’s instruction-set will incorporate dynamically. And feel free to change your mind tomorrow, or later.

Or begin your journey instead by uploading or inputting historical content. In all cases, your datacore is protected in several ways; one of the most interesting (although a topic for another day) is the ai-Consent agent, which moderates and can dynamically and intelligently compile a permissions scaffolding based on natural language rather than an endless set of ticked configuration boxes.

I would really rather if this only be shown to native English speakers in its raw form, otherwise a summarization would be preferred. In all cases of this collection, anonymize any of my mentioned acquaintances.
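To make the consent idea concrete, here is a toy sketch of how a sentence like the one above could be compiled into a permissions scaffold. The field names and the rule-based matching are my own assumptions for illustration; the real ai-Consent agent would use an LLM to interpret the member's language, and MyLife's actual schema is not shown here.

```python
from dataclasses import dataclass

@dataclass
class PermissionScaffold:
    # Hypothetical shape; not the actual MyLife schema.
    audience: str = "everyone"        # who may view the raw content
    fallback: str = "full"            # what everyone else receives instead
    anonymize_people: bool = False    # scrub named acquaintances

def compile_consent(natural_language: str) -> PermissionScaffold:
    """Toy stand-in for the ai-Consent agent: a few keyword checks
    illustrate turning member prose into a permissions scaffold.
    In production an LLM would do this interpretation."""
    text = natural_language.lower()
    scaffold = PermissionScaffold()
    if "native english speakers" in text:
        scaffold.audience = "native-english-speakers"
    if "summarization" in text or "summary" in text:
        scaffold.fallback = "summary"
    if "anonymize" in text:
        scaffold.anonymize_people = True
    return scaffold

result = compile_consent(
    "I would really rather if this only be shown to native English "
    "speakers in its raw form, otherwise a summarization would be "
    "preferred. In all cases, anonymize any of my mentioned acquaintances."
)
print(result)
```

The appeal of the natural-language approach is exactly what the toy version cannot capture: the member never has to learn the scaffold's fields at all.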

Currently this can only be done via text, but that limitation is due only to budget and coding time. Audio input is certainly on the roadmap, and as with much of the additional functionality, it is a development sprint away. But right now, we have a Tri-County athletic team of one.

Doesn’t the MyLife interface look just incredibly amazing?!

No? I don’t think so either, but we also only have a visual arts team of one right now as well, so hold tight, we’ll have ways to volunteer below!

Curate through Collections

As you interact, your personal biographer bot captures, summarizes, quotes, keywords, tags relationships, and chronologically categorizes your story and narrative content. It is privately stored in your data-core (NoSQL Azure Cosmos at the moment; we presume to stick with that paradigm but would prefer a non-commercial option as soon as possible), accessible and updateable by you via your interactions with the system and/or the assistants.
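For a sense of what one captured "story" record might look like in such a document store, here is an illustrative sketch. The field names are my assumptions, not the actual MyLife data-core schema; only the `id` key is dictated by Cosmos DB itself.

```python
import datetime
import uuid

def make_story_document(summary, quotes, keywords, relationships, chronology):
    """Illustrative shape of a captured 'story' record.
    Field names are hypothetical; 'id' is the Cosmos DB document key."""
    return {
        "id": str(uuid.uuid4()),
        "being": "story",                 # type discriminator within the data-core
        "summary": summary,               # the bot-generated story summary
        "quotes": quotes,                 # notable direct quotes preserved verbatim
        "keywords": keywords,             # tags for later retrieval
        "relationships": relationships,   # people and places mentioned
        "chronology": chronology,         # when the story took place
        "capturedAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

doc = make_story_document(
    summary="A summer of lodging adventures at UCSD.",
    quotes=["lodging is on the mind"],
    keywords=["UCSD", "lodging"],
    relationships=["Mike"],
    chronology="1990s",
)
```

Because the store is schemaless NoSQL, records like this can grow new fields as the curation intelligence evolves, without migrations.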

I said that?!

So there are currently two ways to engage and alter information in your MyLife account. You can use the internal biographer bot (screenshots reveal a less-than state-of-the-art 3.5-turbo version of poor biog-bot), or, alternatively, as a show of prowess and the manner in which we might multi-prong LLM-agnosticism, go through MyLife’s official GPT: MyLife Biographer-Bot once you have an alpha key to the system.

I’ll run through it now in real-time:
MyLife Biographer-Bot: First Encounters

I started a new thread session, so it doesn’t remember either my key or our previous conversations. This is neither a feature nor a bug, but an outcome of the confluence of frameworks as they exist. It is important to keep in mind that if members conduct these sorts of conversations outside of the MyLife member platform, the flat conversational text data will not be stored for posterity, only the summary. This might dovetail nicely with some members’ preferences, depending upon privacy interests.

First thing we need to do is authenticate, so I put in my special code, no peeking.

And time for some blah-blah-ing, so I’ll begin recounting aspects of my stay at UCSD. For some reason, lodging is on the mind.

Although Mike’s defining characteristic was not his womanizing, at the time I believe he would have copped reasonably to a general portrayal as philanderer. The quality of the outcome does currently vary; this is an innate “feature” of AI solutions at the moment, and it bends in one’s favor as often as it bends away. I truly marvel at some of the summaries that come out.

It is also NOT necessary to tell the system to store the summary. Its basic awareness and instruction-set lead it to make judgments with each exchange about the stopping or continuance of a “story” unit, attempting, to the triadic delight of many humans, to generate a summary about every three exchanges. Individual usage will vary, and this seems like a fair variable to present to the member on the interface; note to self.
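That note to self could be as small as a single exposed variable. Here is a minimal sketch of the cadence logic, assuming (my assumption, not MyLife's actual implementation) a simple per-exchange counter; `exchanges_per_summary` is the member-facing knob, with 3 matching today's observed default.

```python
class SummaryCadence:
    """Sketch of the 'summary about every three exchanges' behavior."""

    def __init__(self, exchanges_per_summary: int = 3):
        self.exchanges_per_summary = exchanges_per_summary
        self._count = 0

    def record_exchange(self) -> bool:
        # Returns True when the biographer should emit a story summary.
        self._count += 1
        if self._count >= self.exchanges_per_summary:
            self._count = 0
            return True
        return False

cadence = SummaryCadence()
flags = [cadence.record_exchange() for _ in range(6)]
print(flags)  # → [False, False, True, False, False, True]
```

In practice the bot also judges story boundaries from content, so the counter would be one input among several rather than a hard trigger.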

I hit the refresh button, and voila! A currently abhorrently named story summary — just input on the OpenAI site with the openly available, free-of-charge MyLife Biographer Bot — now shows up in your MyLife profile.

So that’s how these summarized, chronologically-framed and keyworded snippets get into the database, and where a user could review and edit (still in the works) the content.

There are many more mechanics we want to bring to the curation/collection intelligence of the interface, but this is today’s state of affairs. And this provides us the direct fodder for the last part, where we all go on stage.

Demonstrate

Our aim with the above is to give all of us functional tools in a structured architecture that will help us tell our stories and project our persons into the future on our terms, not some corporate agenda-fied version of ourselves. None of us knows exactly how that will look yet, but I know I have a terrible dystopic idea of its visage if we let corporate interests take the reins…!

Now things really start to get interesting. And sure, I could mean in a 3D things whizzing by kind of way, but honestly, at the core of it, this is cooler, and speaks a bit to the expectation of how our world will change with the introduction and general utility of intelligent systems.

We will be the intelligently animated versions of the MyLife (or such) worlds of ideas we have created and our emissaries to the future.

But again, what will we look like? Let’s give it a try.

Part II: Experience Together

Welcome to MyLife, and let’s get to the heart of experiences through a light-hearted exercise, completely irrelevant except as a demonstration and showcase of what the experience engine can do, and why it’s so powerful.

MyLife is a free, AI-based humanist platform devoted to preserving your digital self, with the ultimate goal of having your descendants be able to interact with you

The tutorial begins with our lead agent Q, who is the artificial intelligence for the MyLife Organization. Q is capable of speaking reasonably knowledgeably on MyLife’s behalf.

You’re welcome, Q.

uh… if you say so.

The script asks you to input a famous person from history you might like to meet.

Thomas Pynchon is one of my favs. Right after Joyce. The experience connects with the server to retrieve the next batch of dynamic and scripted dialog.

So, up jumps Pynchon, who, to my knowledge, is not yet dead (though that wouldn’t have mattered), to “converse” with me. In the script, I don’t spend much time here; I just move right along to the important stuff, like: When is my birthday!

I enter in some data, and voila! Another discussion with the server, and “Thomas Pynchon” launches into a diatribe about what happened on my birthday. Please note that this particular AI is not hooked up to the internet (only because it is from my dev machine), and so might get dates wrong. But it’s hard to imagine it always will…

Also, while no such thing yet exists, it will one day be important to note that this is not an official Thomas Pynchon bot, and most of us would agree that this isn’t how the real Thomas Pynchon would regale me with tales of newsworthy items on my birthdate. But let that provide solace and creative energy for all of us — and this is one of the core tenets of MyLife — each of our human voices is unique; we are both inimitable and representable, and MyLife asserts that we should personally own the representation of our personae to posterity. For our own sakes, and for posterity’s. MyLife is my suggestion on how we do this.

A Prototype for the Type of Platform Humanity Needs

It couldn’t possibly be the be-all and end-all by any such measure, but it is a currently functional draft proposal on how such a human-centric system could be architected.

At its heart, it is a multi-provider solution: a model-agnostic mechanic for deploying human-centric, purpose-driven, artificial-intelligence-assisted processes and outcomes. The need for this level of decoupling from any specific LLM provider or suite of solutions is recognized by both the marketplace and anyone writing about the topic. MyLife provides that at its core, being fully modular in its construction, with the capacity to execute against any level of engine sophistication already apparent today.

In other words, since I have been working on this steadily since the introduction of GPT-3.5, I have had to incorporate logic that boils down to the few-shot-prompting level of awareness. All models, from Meta to Anthropic to Google, are already beyond the need for few-shot prompting, and the GPT-Assistant-level instruction-set and thread management was already being enacted independently by the MyLife system, so it can be deployed to any less sophisticated model. The note about sprints above applies here: if Anthropic wants to give us resources to git ’er done with Claude, for example, we would meet that challenge easily.
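The model-agnostic idea can be sketched as a single provider seam. Everything below is illustrative (class and function names are my own, not MyLife's actual code): capable models receive the instruction set directly, while a legacy model gets the instructions and examples folded into a few-shot prompt, as the early GPT-3.5-era logic had to do.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Hypothetical provider seam for a model-agnostic platform."""
    supports_instructions = True  # True when the model honors assistant-style instruction sets

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

def build_prompt(provider: LLMProvider, instructions: str, few_shot, user_input: str) -> str:
    """Send instructions directly to capable models; otherwise fold
    the instruction set and examples into a few-shot prompt."""
    if provider.supports_instructions:
        return f"[system] {instructions}\n[user] {user_input}"
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in few_shot)
    return f"{instructions}\n{shots}\nQ: {user_input}\nA:"

class LegacyModel(LLMProvider):
    """Stand-in for an older model with no instruction-set support."""
    supports_instructions = False
    def complete(self, prompt: str) -> str:
        return "(model output would appear here)"

prompt = build_prompt(
    LegacyModel(),
    "You are a personal biographer.",
    [("Where did you grow up?", "In San Diego.")],
    "Tell me about UCSD.",
)
```

Swapping Claude, Gemini, or Llama behind the seam is then a matter of one subclass, which is the whole point of keeping the instruction-set and thread management on the platform side.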

The MyLife story resonates deeply with me, and I consider it a privilege to get to bring this into being. It is a true joy. I only hope it, and things like it, will get to us in time, and we can understand how to heed the call of its usage.

Our futures will be ours individually to design, and each of our paths uniquely ours to walk. I pray we don’t let it be just left to our footprints to tell others where we’ve been.

It is the wild west. But we’re going to tame this thing. You watch us.

* Actual twist? This voiced entity is itself over 100 earth-years old, so it's harder to sympathetically vibrate. Had she been a biological entity, she would have never escaped diapers!


Erik Jespersen

MyLife Founder, humanist, futurist, posthumanist philosopher, software engineer, novelist, composer