Aapeli Vuorinen

LLMs for friends, family, & more

This is a joint project with Val Natanelov.

Large Language Models (LLMs) are the greatest technology I have witnessed since becoming a thinking adult. They are useful for so many things, and learning to use them is key to thriving in today’s world. In the same way that knowing how to use a search engine was an important skill before LLMs, knowing how to use LLMs is a crucial skill now that they are here.

Get access

We want to support you in learning to use LLMs safely, while giving you the ability to try out all the new models and tools as they come out.

To get started, click the following button:

Axess-AI.com

Click on “Sign up” and fill in your name, email, and password. Once you’ve done that, you’ll see a blank screen: message one of us on Signal and we will approve your account!

Models

Recommendations last updated 7th April, 2025.

As you probably know, different companies have different models. There are also different types of models, such as “thinking” models. Here is a bit of a starter on what models to use.

Everyday models

If you are doing everyday things, we recommend:

Thinking models

Thinking models use something called chain of thought. Normal models start generating text as soon as you submit a prompt, whereas thinking models first produce hidden “thinking tokens” where they essentially talk to themselves (including talking themselves out of saying silly, untrue things). This makes them slower to start responding, but often makes their responses better. As you play with them, you’ll probably notice that sometimes you don’t want to wait for a response, so they have their time and place.
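For a rough, made-up illustration: ask “Is 3599 a prime number?” and a normal model has to commit to an answer straight away, while a thinking model might first work through something like “3599 = 3600 − 1 = 60² − 1² = 59 × 61” in its hidden thinking tokens, and only then answer that no, 3599 = 59 × 61. You never see the scratch work, just the (usually more reliable) final answer.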

(Note: if you don’t see these models, ask us for access. Some are a bit pricey, so they are hidden by default.)

What’s the best model?

Which model is best, in the end, depends on you and your task! There are benchmarks that try to compare models, but all the “headline” ones are pretty gamed, and real-world usefulness depends on what you are doing.

Usage limits

There are currently no hard limits on usage. We’ve tagged models with the following:

  1. Cost: Dirt Cheap: these models are dirt cheap to run, you can use them even for silly things.
  2. Cost: Cheap: these models are pretty cheap, and you can basically use them as much as you need.
  3. Cost: Everyday: these models are cheap, but not dirt cheap. Feel free to use them for everyday tasks and as a default, but don’t ask them to generate 10 000 words of lorem ipsum or whatever.
  4. Cost: Expensive: these models are not cheap. You can play with them a bit, and use them when the cheaper models don’t do the trick. But don’t use them for everyday tasks if the cheaper ones do the job.
  5. Cost: Super Expensive: these models are genuinely pricey. Try them out of curiosity, or reach for them when nothing cheaper works, but please don’t make them your default for anything.

Privacy while using LLMs

Many people are wary of using LLMs because they are afraid their data will be misused by the companies building these models. Sometimes this is warranted, such as with free providers that explicitly use your data for training (e.g. free ChatGPT, free Meta.ai, etc.).

All providers available on our platform contractually agree not to use your chats for training their models and not to store chat logs. You are not the product: we pay real money to use these APIs.

In our judgement, the LLMs available through our platform are safe to use for normal everyday tasks, in much the same vein as other platforms that are not end-to-end encrypted (e.g. social media, email, etc.). We use these LLMs for all kinds of tasks, including those that involve personal information (names, addresses, details, etc.). Still, LLMs are not end-to-end encrypted, and we recommend not using them for extremely sensitive matters (the kinds of things you would not do outside end-to-end encrypted platforms like Signal).

Your chats are stored on our servers so that we can provide you the service. We will only ever use them to produce usage and cost statistics.