
Build Your Own Timeline Algorithm

👉 marks items that are still work in progress.

Timeline algorithms should be useful for people, not for companies. Their quality should not be evaluated in terms of how much more time people spend on a platform, but rather in terms of how well they serve their users’ purposes. Objectives might differ, from delving deeper into a topic to connecting with like-minded communities, solving a problem, or just passing time until the bus arrives. How these objectives are reached might differ too, e.g. in how well an approach respects instances’ bandwidth, one’s own as well as others’ privacy, algorithm trustworthiness, and software licenses.

This blueprint introduces an approach to personal, local timeline algorithms that people can either run out-of-the-box or customize.

Figure: a 2D map of multiple Mastodon timelines (home, local, public, and tag/gopher) created with BYOTA. Each point is a status; areas of the scatterplot are labeled like places on a map (e.g. "The AI peninsula", "The Billionaires swamp"). Labels were added manually.

Quick-start

Run the demo (no Mastodon account needed!):

  • Try on Spaces (👉 make sure link is ok)
  • Locally with Docker:
    • docker run -it -p 8080:8080 -p 2718:2718 mzdotai/byota:latest demo.py
    • open a browser and connect to http://localhost:2718
    • when asked for a password, enter byota

Run on your own timelines (Mastodon credentials required):
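As a rough, hypothetical sketch of what this mode needs (not BYOTA's own setup code): create an access token for BYOTA under your instance's Preferences → Development page, then check that Mastodon.py can use it. The instance URL and token below are placeholders.

```python
# Hypothetical credential check, not part of BYOTA itself: verify that an
# access token created under Preferences -> Development on your Mastodon
# instance can read your account and home timeline via Mastodon.py.
from mastodon import Mastodon

client = Mastodon(
    api_base_url="https://your.instance",  # placeholder: your Mastodon instance
    access_token="YOUR_ACCESS_TOKEN",      # placeholder: token with read scope
)

print(client.me()["username"])             # should print your account name
print(len(client.timeline_home(limit=5)))  # should print 5 if the token works
```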

How it Works

BYOTA relies on a stack that uses Mastodon.py to fetch recent timeline data, llamafile to calculate post embeddings locally, and marimo to provide a UI that runs in your own browser. Using this stack, you can visualize, search, and re-rank posts from the fediverse without any of them leaving your computer.
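The snippet below is a minimal sketch of that idea, not BYOTA's actual code: it fetches your home timeline with Mastodon.py, embeds the posts through a locally running llamafile server (assumed here to expose an OpenAI-compatible /v1/embeddings endpoint on port 8080, the port the Docker demo maps), and re-ranks them by cosine similarity to a free-text query. MASTODON_URL, MASTODON_TOKEN, and the endpoint URL are placeholders you would adapt to your setup.

```python
# Minimal sketch of a BYOTA-style local pipeline (assumptions noted above):
# fetch statuses with Mastodon.py, embed them via a local llamafile server,
# and re-rank them by similarity to a free-text query. Nothing leaves your machine
# except the timeline request to your own instance.
import os
import re

import numpy as np
import requests
from mastodon import Mastodon

EMBEDDING_ENDPOINT = "http://localhost:8080/v1/embeddings"  # assumed llamafile endpoint


def fetch_home_statuses(limit: int = 40) -> list[str]:
    """Download recent home-timeline statuses and strip their HTML tags."""
    client = Mastodon(
        api_base_url=os.environ["MASTODON_URL"],    # e.g. https://your.instance
        access_token=os.environ["MASTODON_TOKEN"],  # token with read scope
    )
    statuses = client.timeline_home(limit=limit)
    return [re.sub(r"<[^>]+>", " ", s["content"]).strip() for s in statuses]


def embed(texts: list[str]) -> np.ndarray:
    """Request one embedding vector per text from the local llamafile server."""
    response = requests.post(EMBEDDING_ENDPOINT, json={"input": texts})
    response.raise_for_status()
    return np.array([item["embedding"] for item in response.json()["data"]])


def rank_by_similarity(query: str, texts: list[str]) -> list[str]:
    """Re-rank statuses by cosine similarity to a query string."""
    vectors = embed(texts + [query])
    posts, query_vec = vectors[:-1], vectors[-1]
    posts = posts / np.linalg.norm(posts, axis=1, keepdims=True)
    query_vec = query_vec / np.linalg.norm(query_vec)
    order = np.argsort(posts @ query_vec)[::-1]  # most similar first
    return [texts[i] for i in order]


if __name__ == "__main__":
    statuses = fetch_home_statuses()
    for post in rank_by_similarity("retro computing and gopher", statuses)[:5]:
        print(post[:120])
```

In BYOTA the same building blocks are wrapped in marimo notebook cells, so the fetching, embedding, and re-ranking steps become interactive UI elements instead of a script.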

Pre-requisites

  • System requirements:

    • OS: Windows, macOS, or Linux
    • Python 3.11 or higher
    • 👉 Minimum RAM: 1GB (double check)
    • Disk space: 1.3GB for the Docker image, or ~1GB for local installation (~800MB for code + deps, plus the embedding model of your choice). If you want to compile llamafile yourself, you'll need ~5GB extra (NOTE: the Docker image already contains it)
  • Dependencies:

    • Python dependencies are listed in pyproject.toml

Troubleshooting

The code is still experimental and will be subject to breaking updates in the next few weeks. Please be patient, raise issues, and check often for the latest updates! 🙇

License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.

Contributing

Contributions are welcome! To get started, you can check out the CONTRIBUTING.md file.
