GEDRAG AI – Version 0.1.5

Jobs: -- pending, -- running
Files stored: --
Cache: -- / -- (hits --, entries --, stores --, evictions --)
Workers (idle): Lexical --, Legal parser --, MCP --
MCP: active --, drops --, tool errors --
Vector jobs: On
Auto-learning: Off

Files

    Upload

    Quick tag:

    Uploads are encrypted and vectorized automatically.

    ⬆ Drop files here or click to browse. Supports multi-file selection; use the checkbox above to enable directory upload.
    No files queued.


    Chat / Threads

    Name | Model | Vector base | Tags | Domain | Sources | Ignored in fast mode

    Archived chats

    Select an archived chat to preview.

    Response annotations

    Review, edit, or delete feedback left on engine answers. Use the chat to add new annotations or to provide additional context.

    Internal command reference

    You can use these commands inside the application:

    • /tag <path/> <tag> – Add a tag to the directory and its files.
    • /untag <path/> <tag> – Remove a tag from the directory and its files.
    • /tagq <tags>|<prompt to add tags> – Add tags based on a prompt.
    • /untagq <tags>|<prompt to remove tags> – Remove tags based on a prompt.
    • /ocr <path> – Extract text from an image.
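
    For example, a typical sequence might look like the following (the folder, tags, and prompt shown are purely illustrative, not actual data from your library):

        /tag contracts/ nda
        /tagq confidential|documents that mention non-disclosure obligations
        /ocr contracts/scan-001.png
        /untag contracts/ nda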

    Token usage per request

    Loading...
    Date | Question | User | Method | Path | Models | Prompt tok | Output tok | Total tok

    No metrics available yet.

    Answer quality tracker

    Loading…
    Timestamp | Event | Mode | Docs | Scores | Refusal | Reason | Path | Duration

    No quality events recorded yet.

    Vectorization

    Loading...

    Queue

    Created | UUID | Path | Version | Base | Owner | Progress | Status | Attempts | Last update | Next attempt | Last error | Source | Actions

    No vectorization in progress.

    Recent history

    Date | UUID | Path | Base | Result | Details | Actions

    No recent history.

    Logs (append-only)

    Logs filtered for 
    Date | UUID | Stage | Status | Base | Worker | Info | Message | Path | Source

    No events recorded.

    Vector store

    Filter: All bases
    Loading vector store stats…

    Stored points: -- (collections: --)

    Raw vectors: -- (indexed: --)

    Segments footprint: -- (payload indexes: --)

    No vector collections detected.

    Collection | Embedding | Points | Vectors | Indexed | Segments | Payload idx | Dimensions | Updated

    Knowledge

    Idle
    Loading...

    Active tasks

    Created | UUID | Path | Base | Version | Owner | Status | Attempts | Last update | Next attempt | Last error | Actions

    No active knowledge tasks.

    Recent history

    Date | UUID | Path | Result | Details | Actions

    No recent history.

    Logs (append-only)

    Logs filtered for 
    Date | UUID | Stage | Status | Base | Worker | Message | Details | Path

    No logs recorded.

    Knowledge mindmap

    Visualise top domains, codes, and jurisdictions extracted from your knowledge base.

    Loading…
    No mindmap data available yet.
    Mindmap data will appear here once legal metadata is available.

    Legal insights

    Explore enriched legal metadata extracted from your knowledge base.

    Loading…

    No legal metadata available yet.

    Top domains
    Top codes
    Top jurisdictions
    Document | Version | Last enriched | Effective date | Domains | Codes | Jurisdictions | Cross refs | Changes

    Legal search – LexTutor

    Articles

    Jurisprudence

    Debug LexTutor
    No query yet.

    Lexical index

    Status

    Loading…
    Pending – No status available.
    0% (ready threshold: --)
    Total: -- | Indexed: -- | Pending: -- | Errors: --
    Snapshot: — | Last indexed: — | Commanded by: —
    Values refresh automatically.

    Actions

    Control the lexical worker in real time.

    No action pending.

    Logs

    Loading…
    Date | Level | Message | Details | Chunk
    No log entries available.

    Settings

    Managed in the server configuration file.


    LLM Providers

    Configure the providers and models available in chat. Add the API keys you want to use and pick your personal default model.

    Jump to the API Keys tab to add or update credentials for each provider.

    Used when generating document embeddings during uploads and enrichment.

    Embedding bases

    Manage global embedding servers from the admin table below. The active base drives every vectorization and knowledge job.

    Active base: -

    LexTutor maintenance

    Rebuilds the article → TF rulings index to reflect the latest ingestions.

    Embedding bases (admin)

    Create, activate, or delete embedding bases for all users.

    Include the protocol and optional path, for example https://localhost:11434/v1.

    Required when creating a base. Stored encrypted at rest.

    Store the API key for each provider you intend to use. Keys are encrypted per user and never shared with other accounts.

    Update embedding credentials

    Base: —

    Leave the API key empty to keep the existing secret.

    Adjust worker allocation

    Base: —

    Auto-learning

    No active task at the moment.

    Click the indicator to load active tasks.

    Source

    Excerpt

    Question asked


    Expert analysis

    Stage details

    Message


    Create a folder

    Full path:

    Move file

    Current file:

    Target path:

    Delete this folder?

    This will permanently remove the folder and all of its contents.

    This action cannot be undone. All nested files and subfolders will be removed.

    Move folder

    Current folder:

    Resulting path:

    Rename folder

    Current folder:

    Resulting path:

    Apply tags to folder

    Quick tag:

    Existing tags are preserved; selected legal presets replace previous legal categories.

    Revectorize file

    Queue a fresh embedding job for this document.

    Latest version · Manual trigger

    Processing happens in the background. Progress updates will appear automatically once the job starts.

    Revectorize all your files

    This action will restart vectorization for every file in your account.

    It may take a while, and answer quality can degrade until revectorization finishes.

    Purge all vectors?

    This permanently deletes every vector chunk currently stored for your files across all embedding bases.

    • Vector search will return no results until revectorization finishes.
    • Running vectorization jobs will recreate fresh embeddings automatically.
    • Knowledge artifacts remain untouched.

    Use this when switching embedding models or before bulk revectorization. This cannot be undone.

    Clear knowledge enrichment jobs?

    Remove queued and historical enrichment jobs when the backlog gets stuck. Vector search remains available.

    • Running jobs are cancelled immediately.
    • Generated insights can be wiped to reclaim storage.
    • Use the scope selector to target only auto-learning if needed.

    This action cannot be undone. Legal insights and knowledge relations will be rebuilt only after new enrichment jobs run.

    Trigger auto-learning now?

    Launch a manual auto-learning pass. Eligible documents are scanned immediately and enrichment jobs are scheduled without waiting for the next cycle.

    Use this when you need fresh knowledge before the next automatic run.

    Reset auto-learning pipeline?

    This will purge existing knowledge artifacts, delete previously generated insights, and queue revectorization for every file before restarting auto-learning.

    Use this after major embedding or enrichment changes. The process may take time depending on your library size.

    Archive this conversation?

    The chat will be moved to Archived chats. You can restore it anytime from the dedicated tab.

    Selected chat