* Add a leader election mechanism to avoid runtime issues when multiple schedulers are running
- Reduce DB load and the risk of service-side issues by limiting job execution to a single elected leader at any given time. The leader is responsible for executing all scheduled jobs, though every worker can still add and remove jobs
* Set a max duration for the scheduler leader position (12 hrs), and surface an error if an automation is not added successfully
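The mechanism described above is essentially a database-backed lease: every worker periodically tries to claim or renew a leader record, and only the current holder executes due jobs. A minimal illustrative sketch of that pattern follows, in TypeScript with hypothetical names; it is not the actual Khoj scheduler code, just the shape of the lease logic with a 12-hour term cap.

```typescript
// Illustrative lease-based leader election (hypothetical names, not Khoj's actual code).
// One shared record acts as the lease; whichever worker holds an unexpired lease is the
// leader and is the only process that executes scheduled jobs.

interface Lease {
  holderId: string;   // id of the worker currently holding leadership
  acquiredAt: number; // epoch ms when the current term started
  renewedAt: number;  // epoch ms of the last heartbeat/renewal
}

const MAX_TERM_MS = 12 * 60 * 60 * 1000; // force re-election after 12 hours
const STALE_MS = 5 * 60 * 1000;          // treat a silent leader as dead after 5 minutes

// `store` stands in for an atomic read-modify-write on the shared DB record.
async function tryBecomeLeader(
  workerId: string,
  store: {
    get(): Promise<Lease | null>;
    compareAndSet(prev: Lease | null, next: Lease): Promise<boolean>;
  }
): Promise<boolean> {
  const now = Date.now();
  const lease = await store.get();

  const expired =
    lease === null ||
    now - lease.renewedAt > STALE_MS ||   // leader stopped heartbeating
    now - lease.acquiredAt > MAX_TERM_MS; // term limit reached, step down

  if (lease && lease.holderId === workerId && !expired) {
    // Already leader: just renew the heartbeat.
    return store.compareAndSet(lease, { ...lease, renewedAt: now });
  }
  if (!expired) return false; // someone else holds a valid lease

  // Claim (or reclaim) leadership atomically; losers see a failed compare-and-set.
  return store.compareAndSet(lease, { holderId: workerId, acquiredAt: now, renewedAt: now });
}

// Every worker can add/remove jobs in the shared store, but only the elected
// leader runs them on its tick:
//   if (await tryBecomeLeader(myId, store)) await runDueJobs();
```

The cap on `acquiredAt` is what enforces the 12-hour limit: even a healthy leader eventually lets its term expire, so leadership rotates and a wedged process cannot hold the position indefinitely.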
* rough sketch of desktop shortcuts. many bugs to fix still
* working MVP of desktop shortcut khoj
* UI fixes
* UI improvements for editable shortcut message
* major rendering fix to prevent clipboard text from getting lost
* UI improvements and bug fixes
* UI upgrades: custom top bar, edit sent message and color matching
* removed debug javascript file
* font reverted to Noto Sans
* cleaning up the code and removing diffs
* UX fixes
* cleaning up unused methods from html
* front end for button to send user back to main window to continue conversation
* UX fix for window and continue conversation support added
* migrated common js functions into chatutils.js
* Fix window closing issue on macOS (see the sketch after this list):
1. Use a helper function to determine whether the window is open by checking for a browser window with shortcut.html loaded
2. Use an event listener on the window to handle teardown
* removed extra comment and renamed continue convo button
---------
Co-authored-by: sabaimran <narmiabas@gmail.com>
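For reference, the macOS window fix above boils down to asking Electron which windows are open and whether any of them has shortcut.html loaded, then letting the window's own `closed` event drive teardown. A rough sketch with hypothetical helper names; the real wiring in the desktop app may differ.

```typescript
import { BrowserWindow } from "electron";

// A window counts as "the shortcut window" if it has shortcut.html loaded.
function getOpenShortcutWindow(): BrowserWindow | undefined {
  return BrowserWindow.getAllWindows().find(win =>
    win.webContents.getURL().endsWith("shortcut.html")
  );
}

function openShortcutWindow(): BrowserWindow {
  // Reuse the existing window instead of spawning a duplicate.
  const existing = getOpenShortcutWindow();
  if (existing) {
    existing.focus();
    return existing;
  }

  const win = new BrowserWindow({ width: 400, height: 600 });
  win.loadFile("shortcut.html");

  // Let the 'closed' event drive cleanup instead of tracking open/closed state
  // manually, which is where stale references tend to creep in on macOS.
  win.on("closed", () => {
    // release any references held to this window here
  });
  return win;
}
```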
- Add an experimental feature used for fact-checking falsifiable statements with customizable models. See attached screenshot for example. Once you input a statement that needs to be fact-checked, Khoj goes on a research spree to verify or refute it.
- Integrate frontend libraries for [Tailwind](https://tailwindcss.com/) and [ShadCN](https://ui.shadcn.com/) for easier UI development. Update corresponding styling for some existing UI components.
- Add component for model selection
- Add backend support for sharing arbitrary packets of data that will be consumed by specific front-end views in shareable scenarios
Initialize our migration to Next.js for front-end views, starting with the Agents page. This includes setup for getting authenticated users, reading in available agents, opening a pop-up modal when an agent is clicked, and allowing users to start new conversations with agents.
Best attempt at an in-place migration, though there are some noticeable differences.
Also adds views for chat that are not yet in use, but in an experimental phase.
Previously the text_to_image helper would only trigger the image
generation flow if the OpenAI client was set up. This is no longer
required, since the offline chat model + sd3 API combination works, so remove that check
- Add instructions for self-hosted users, with info and warning boxes, to
  help avoid and fix common issues when setting up the Khoj server
- Create new Advanced Self Hosting section
- Extract Advanced Self-Hosting Sections from the Advanced Page and
move them to separate Pages under Advanced Self Hosting section
- Improve OpenAI Proxy Docs
- Put Ollama setup as a section under OpenAI API Proxy page instead
of a separate page
- Add Section to use Khoj with chat model from LM Studio
- Update LiteLLM docs to use chat model from LM Studio
This handles propagating the manifest.json minAppVersion field to the
versions.json file.
The minAppVersion field specifies the minimum Obsidian app version
supported by a given Khoj plugin version
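Concretely, this follows the standard Obsidian plugin convention: versions.json maps each released plugin version to the minimum app version it supports, copied from manifest.json at release time. A small sketch of such a sync script, assuming the conventional file layout (the repo's actual script may differ in details):

```typescript
// Sync the plugin's minAppVersion from manifest.json into versions.json.
import { readFileSync, writeFileSync } from "fs";

// manifest.json carries the plugin version and the minimum Obsidian app version it supports.
const { version, minAppVersion } = JSON.parse(readFileSync("manifest.json", "utf8"));

// versions.json maps each released plugin version to its minAppVersion, which
// Obsidian uses to decide which plugin release a given app version can install.
const versions = JSON.parse(readFileSync("versions.json", "utf8"));
versions[version] = minAppVersion;
writeFileSync("versions.json", JSON.stringify(versions, null, "\t"));
```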
Khoj will find and display notes similar to the current entry in the side pane when
1. the find similar view is open in the side pane, and
2. the cursor has moved to a new entry
### Major
- Find similar notes to current note at cursor automatically in background
- Only show headings of search results and increase the default results count
### Minor
- Pass absolute path of file to index from khoj.el emacs client
- Update help message to only show the smaller set of new keybindings
- Fix edge cases in loading some chat sessions
To improve the developer experience for front-end development, we're migrating to Next.js. In order to do this migration page-by-page, we're using static site generation via Next.js. This also helps us avoid making cross site requests from front-end to back-end for the time being, while giving a ramp to separating out server and client if needed for scale down the road.
Dev instructions for using the Next.js setup are in the added README.
This adds scaffolding for including the built files in the Python package as well as the Docker images. The Docker setup has been tested locally. To verify the build is working as expected, navigate to {khoj_host}:42110/experimental and check that the experimental page comes up.
This setup works with serving static files included in the src/interface/web folder from the Django app. The key bit for understanding the setup is in the yarn export command in package.json.
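For orientation, here is a hedged sketch of what a static-export setup like this typically looks like; the actual config and `export` script in the repo may differ in names and paths, and the config file may be a plain .js/.mjs rather than TypeScript.

```typescript
// next.config sketch for a fully static Next.js build that a Django app can serve.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export",              // emit plain HTML/CSS/JS instead of a Node server
  distDir: "out",                // static bundle that can be served as ordinary files
  images: { unoptimized: true }, // required when statically exporting next/image
};

export default nextConfig;
```

The `yarn export` script in package.json then presumably chains the Next.js build with copying the exported output into src/interface/web, so Django serves the pages as ordinary static files and the front end never needs to make cross-site requests to the back end.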
- Just use the built-in `npm version` command to update the desktop and Obsidian versions
- Upgrade by major, minor or patch version using the new -t flag in the script
  E.g. `bump_version -t minor`
* Enable text-to-speech responses in Khoj chat
- Current issue: reads out all the markdown formatting, plus waits for the whole result to be streamed before playing it
* Extract content from markdown-formatted text (see the sketch after this list)
* Add a loader for while you're waiting for Khoj's response
* Add user configuration option for chat model options, allow server side configuration for option list
* Join up APIs, views, admin pages to allow configuring custom voice models
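The markdown-extraction step mentioned above is about keeping formatting characters out of the audio. A rough, regex-based illustration; the actual client may use a proper markdown parser instead, and the function name here is hypothetical.

```typescript
// Strip common markdown syntax so the voice doesn't read out formatting characters.
function extractSpeakableText(markdown: string): string {
  return markdown
    .replace(/```[\s\S]*?```/g, " code block omitted ") // don't read code aloud
    .replace(/`([^`]+)`/g, "$1")                         // inline code -> plain text
    .replace(/!\[[^\]]*\]\([^)]*\)/g, "")                // drop images
    .replace(/\[([^\]]+)\]\([^)]*\)/g, "$1")             // keep link text, drop URL
    .replace(/^#{1,6}\s+/gm, "")                         // strip heading markers
    .replace(/(\*\*|__|\*|_|~~)/g, "")                   // strip emphasis markers
    .replace(/^\s*[-*+]\s+/gm, "")                       // strip list bullets
    .replace(/\s+/g, " ")
    .trim();
}

// e.g. extractSpeakableText("**Hello** `world`") === "Hello world"
```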
When creating a new conversation session, automatically prompt for a query, as
that is the expected next action after creating a new session
Pass session-id to khoj-chat to allow reuse from
create-new-conversation func
When deleting a conversation session, do not call load chat session; it is an
unnecessary action.
Use thread-last to improve code flow in new, delete conversation funcs