Archivist™ Windows

Software Documentation


Test Local AI on Your Hardware

Archivist is one of the easiest ways to see what local AI can do on your computer. Download the installer, run it, and you're chatting with an AI model in minutes — no coding, no terminal commands, no package managers, no configuration files.

One Installer, Everything Included

Most local AI tools require you to install Python, download libraries, configure paths, and troubleshoot compatibility issues. Archivist skips all of that.

  1. Install from the Microsoft Store
  2. Launch the app
  3. Download the AI models when prompted (a one-time download of ~1.7 GB)
  4. Start chatting

That's it. The installer includes everything the app needs. The AI models download on first run and are stored locally for reuse.

See Your Hardware in Action

Once you start chatting, Archivist shows you real-time performance metrics right in the status bar:

  • Tokens per second — How fast the AI generates text on your hardware. This updates after every response so you can see your actual throughput.
  • Extraction speed — When converting documents, you'll see MB/s throughput showing how quickly your machine processes files.

These numbers give you a concrete sense of what your computer can handle with local AI.
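Archivist doesn't publish its exact formula, but a tokens-per-second figure like the one in the status bar is conventionally just tokens generated divided by elapsed wall-clock time. A minimal sketch (the function name and numbers are illustrative, not Archivist's internals):

```python
def tokens_per_second(token_count: int, start: float, end: float) -> float:
    """Standard throughput metric: tokens generated divided by
    elapsed wall-clock seconds."""
    elapsed = end - start
    if elapsed <= 0:
        raise ValueError("end must be after start")
    return token_count / elapsed

# Example: 256 tokens generated over 8 seconds of wall-clock time.
print(tokens_per_second(256, 0.0, 8.0))  # 32.0
```

The same idea applies to extraction speed: megabytes of documents processed divided by seconds taken gives the MB/s figure.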

Experiment with Settings

The AI Settings tab lets you tweak parameters and immediately see the effect:

  • Context length — Try 4,096 vs. 8,192 vs. 16,384 tokens and see how it affects response speed and memory usage.
  • Temperature — Slide from focused (0.1) to creative (1.0+) and compare the responses.
  • Retrieval settings — Adjust how many document passages the AI considers and how strictly they need to match.

Change a setting, ask the same question again, and compare. It's a hands-on way to understand what these parameters actually do.
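To see why the temperature slider changes the character of responses, it helps to know what temperature does mathematically: it rescales the model's next-token scores before they are turned into probabilities. A minimal sketch of this standard technique (not Archivist's own code):

```python
import math

def apply_temperature(logits, temperature):
    """Divide logits by the temperature, then softmax.
    Low temperature sharpens the distribution (the top token dominates,
    so output is focused); high temperature flattens it (more tokens
    get meaningful probability, so output is more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
focused = apply_temperature(logits, 0.1)   # nearly all mass on the top token
creative = apply_temperature(logits, 1.0)  # probability spread more evenly
print(max(focused) > max(creative))  # True
```

This is why 0.1 gives consistent, repetitive answers and 1.0+ gives more surprising ones: the underlying probabilities literally become more or less concentrated.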

Why This Matters

If you're considering local AI for your work — whether for privacy, cost savings, or offline access — the first question is usually: "Will it actually run well on my machine?"

Archivist answers that question in minutes instead of hours. No setup headaches, no troubleshooting dependencies, no reading documentation about GPU drivers. Just install, run, and see the results.

And if you decide local AI works for you, everything you set up in Archivist — your documents, your passages, your organization — is ready to use immediately. It's not a throwaway demo; it's a fully functional tool.

Try Larger Models

With a paid license, you can upload your own AI models in GGUF format. This lets you test how different model sizes perform on your hardware:

  • Does a 7B parameter model run fast enough for your workflow?
  • Can your GPU handle a 13B model?
  • Is the quality difference worth the speed trade-off?

Archivist makes it easy to swap models and compare, without rebuilding anything.
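You can ballpark whether a GGUF model will fit in memory before downloading it: file size is roughly parameter count times bits per weight. The bits-per-weight figures below are approximations for common quantization levels (around 4–5 bits for Q4-style quants, 8 for Q8, 16 for FP16), not exact values for any specific file:

```python
def approx_model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF size estimate: parameters x bits per weight, in GB.
    Actual files are somewhat larger due to metadata and mixed-precision
    layers; treat this as a lower-bound ballpark."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B model at ~4.5 bits/weight vs. a 13B model at the same quantization.
print(round(approx_model_size_gb(7, 4.5), 1))   # ~3.9 GB
print(round(approx_model_size_gb(13, 4.5), 1))  # ~7.3 GB
```

Compare the estimate against your available RAM or VRAM (leaving headroom for the context window) to guess which of the questions above your hardware can answer with a "yes".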

© 2024–2026 Integral Business Intelligence. Archivist™, Interchange™, and Sentinels™ are trademarks of Integral Business Intelligence.
