Even though we have carefully designed Olympia assistants to act like humans, their Large Language Model (LLM)-based AI and our technical architecture have limitations that you should keep in mind in order to get the maximum benefit and enjoyment out of your Olympia team.

Olympia can’t run your business (…yet)

We’ve seen new users jump into Olympia with overly ambitious goals for their AI team members. One asked our copywriter to write 20 ebooks on a series of subjects in a single prompt, and then got upset when the assistant could not comply.

The bottom line is that, at the moment, our assistants cannot function as fully (or even semi-) autonomous agents. It’s not that such behavior is impossible, but that the computing resources required to accomplish it are still prohibitively expensive at our price points.

As costs come down, we absolutely plan to release more and more agentic behavior, starting with things such as your assistants being able to “wake up” at certain times of day, or in response to external stimuli, in order to perform certain tasks.

Context Limitations

Our AI does not have unlimited context to consider when responding to requests. Context size is measured in tokens, the atomic units of language that LLM “brains” use when “thinking” about something.

<aside> 💡 You can use this tool to understand how a piece of text might be tokenized by a language model and to see the total number of tokens it contains.

</aside>
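
If you’re curious what tokenization looks like in practice, the short sketch below counts tokens with OpenAI’s open-source tiktoken library. The cl100k_base encoding is an assumption for illustration only; the exact tokenizer used by the models behind Olympia may differ.

```python
# Illustration only: counting tokens with tiktoken.
# The cl100k_base encoding is an assumption; Olympia's models may tokenize differently.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "Olympia assistants measure context in tokens, not characters."
tokens = encoding.encode(text)

print(f"{len(text)} characters -> {len(tokens)} tokens")
# Rough rule of thumb for English prose: one token is about four characters.
```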

Every time you interact with one of Olympia’s AI assistants, they have a maximum of roughly 100,000 tokens of space available to work with in their context, which you can think of as a kind of short-term memory. It’s the amount of text they can consider when formulating a response. This means that in longer conversations, the AI may not be able to take the entire conversation history into account when formulating responses.
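
To make that concrete, here is a simplified sketch of how a conversation history might be trimmed to fit a fixed token budget. This is purely illustrative and is not how Olympia actually manages context; the 100,000-token budget mirrors the figure above.

```python
# Simplified illustration of a token budget (not Olympia's actual context management).
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 100_000  # approximate context size described above


def trim_history(messages: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for message in reversed(messages):  # walk backwards from the newest message
        cost = len(encoding.encode(message))
        if used + cost > budget:
            break  # anything older falls out of "short-term memory"
        kept.append(message)
        used += cost
    return list(reversed(kept))
```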

Our technical architecture accounts for these limitations through the use of long-term memory. Whenever our assistants encounter information they consider “interesting” in your chats, they store it in their long-term memory, which is implemented using a vector database. You can prod your assistants to store key information by explicitly telling them to remember or “take notes” about particular topics, and in fact we encourage you to do so.
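
Conceptually, this kind of long-term memory amounts to storing text alongside an embedding vector and later retrieving the entries most similar to a query. The sketch below is a minimal in-memory illustration, not Olympia’s implementation; the embed() function is a hypothetical stand-in for a real embedding model.

```python
# Conceptual sketch of vector-based long-term memory (not Olympia's actual code).
import numpy as np


def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model (normally an API call)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)


class LongTermMemory:
    """Stores 'interesting' notes and retrieves the ones most similar to a query."""

    def __init__(self) -> None:
        self.entries: list[tuple[str, np.ndarray]] = []

    def remember(self, note: str) -> None:
        self.entries.append((note, embed(note)))

    def recall(self, query: str, top_k: int = 3) -> list[str]:
        q = embed(query)
        scored = [
            (float(q @ vec / (np.linalg.norm(q) * np.linalg.norm(vec))), note)
            for note, vec in self.entries
        ]
        scored.sort(reverse=True)  # highest cosine similarity first
        return [note for _, note in scored[:top_k]]
```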

Maximum Message Sizes

Because of context limitations, we limit the size of messages that you can send to your assistant via our interface to 20,000 characters. The content of any message stored as part of a conversation cannot exceed 50,000 characters, which means that data added to the conversation by assistants as a result of web searches or other third-party system integrations (such as GitHub or Google) might get truncated when stored in our database.
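
As a concrete illustration of those limits, the snippet below shows the kind of guard a client might apply. The constants mirror the limits described above, but the validation and truncation logic is only a sketch, not the exact behavior of our system.

```python
# Illustrative sketch of the message size limits described above (not our exact server logic).
MAX_USER_MESSAGE_CHARS = 20_000    # messages you send through the interface
MAX_STORED_MESSAGE_CHARS = 50_000  # any message stored as part of a conversation


def validate_user_message(text: str) -> None:
    """Reject messages that exceed the interface limit."""
    if len(text) > MAX_USER_MESSAGE_CHARS:
        raise ValueError(
            f"Message is {len(text):,} characters; the maximum is {MAX_USER_MESSAGE_CHARS:,}."
        )


def store_message_content(text: str) -> str:
    """Content pulled in by web searches or integrations may be cut off at the storage limit."""
    return text[:MAX_STORED_MESSAGE_CHARS]
```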

Attention Span

LLM-based AI has trouble paying attention to details in the middle of long contexts. This is a challenge inherent in today’s technology and is not specific to Olympia’s implementation. As a result, in longer conversations the AI’s attention span may be limited, and it may struggle to fully grasp or recall specific details from earlier parts of the conversation.

The solution to this particular limitation is clear and well-established:

DO NOT CREATE LONG CONVERSATIONS!

Every time you have a new topic to discuss with your team members, click on them in the top half of the sidebar to start a new conversation. Do not simply start a new topic in an existing conversation. You will absolutely get better results.

No Asynchronous Work

Because LLM-based AI works on the basis of text completion, it’s relatively easy to get our assistants to inadvertently promise to work on a task and get back to you about it later. Don’t be fooled, though. At the moment, our assistants do not have so-called “agentic” behavior; that is, they cannot go off and do tasks without you.

<aside> 💡 We call promises of work to be done in the future “stalling” and we have systems that try to actively prevent our assistants from doing it, but they’re not perfect.

</aside>