# Incoming
## Major
### Fix Prompt Size Exceeded Issue
- Fix issues related to prompt size. Closes #386. Use the correct tokenizer to calculate whether the input needs to be truncated.
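  A minimal sketch of what tokenizer-aware truncation can look like, assuming the Hugging Face `transformers` tokenizer for Llama; the model id and the `truncate_prompt` helper are illustrative, not Khoj's exact code:
  ```python
  from transformers import AutoTokenizer

  MAX_PROMPT_SIZE = 4096  # illustrative context window

  # Ungated copy of the Llama tokenizer, used here for illustration only.
  tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

  def truncate_prompt(prompt: str, max_tokens: int = MAX_PROMPT_SIZE) -> str:
      """Count tokens with an actual Llama tokenizer, truncating only when needed."""
      tokens = tokenizer.encode(prompt)
      if len(tokens) <= max_tokens:
          return prompt  # prompt fits; no truncation required
      # Keep the most recent tokens, since chat context accumulates at the end.
      return tokenizer.decode(tokens[-max_tokens:], skip_special_tokens=True)
  ```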
### Improve Llama 2 Model Download
- Use the correct download link for Llama 2 -- it should have been using the small model, but was using the medium one
- Add better download logic that retries the download if it fails. Closes #379
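  A hedged sketch of retry-on-failure download logic, assuming the `requests` library; `download_with_retries` and its parameters are hypothetical names, not Khoj's actual downloader:
  ```python
  import time
  import requests

  def download_with_retries(url: str, dest: str, attempts: int = 3) -> None:
      """Download a file, retrying with exponential backoff if a request fails."""
      for attempt in range(1, attempts + 1):
          try:
              with requests.get(url, stream=True, timeout=30) as response:
                  response.raise_for_status()
                  with open(dest, "wb") as f:
                      # Stream in 1 MiB chunks to avoid holding the model in memory.
                      for chunk in response.iter_content(chunk_size=1 << 20):
                          f.write(chunk)
              return  # download succeeded
          except requests.RequestException:
              if attempt == attempts:
                  raise  # out of retries; surface the error
              time.sleep(2 ** attempt)  # back off before the next attempt
  ```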
### Fix Segmentation Fault due to Race
- Add a lock around generating chat responses from the offline model to avoid segmentation faults; see the sketch after this list. Closes #367
- Add a loading symbol to the web chat UI while the model is thinking. Closes #392
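  A minimal sketch of serializing model access, assuming `model` is a callable wrapper around the loaded offline model; the names are illustrative:
  ```python
  import threading

  # llama.cpp-backed models are not safe for concurrent inference, so one shared
  # lock serializes all chat generations on the same model instance.
  chat_lock = threading.Lock()

  def generate_chat_response(model, prompt: str) -> str:
      """Generate a response while holding the lock, so requests never overlap."""
      with chat_lock:
          return model(prompt)
  ```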
### Improve Chat Response Latency
- Improve offline chat performance by increasing the batch size (via `n_batch`) to automatically engage more cores/GPU, using a smaller model, and fixing the prompt vs. response token generation numbers. Closes #363
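  A sketch of what the batch-size tuning can look like, assuming the `gpt4all` Python bindings; the model filename and the chosen `n_batch` value are examples, not the exact values used:
  ```python
  from gpt4all import GPT4All

  # Example filename; in practice this is whatever model file the downloader fetched.
  model = GPT4All("llama-2-7b-chat.ggmlv3.q4_K_S.bin")

  # A larger n_batch evaluates more prompt tokens per step, which engages more
  # CPU cores (or the GPU) and cuts the time to first token.
  response = model.generate("What is the capital of France?", n_batch=512)
  ```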
### Fix Fake Dialogue Continuation
- Fix formatting of the user query with offline chat; this was contributing to #398
- Stop Llama 2 from creating fake dialogue continuations; see the sketch below. Closes #398
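  One way to cut off fake continuations is to stop streaming as soon as the model emits its turn marker. A minimal sketch, assuming a chunked response stream; `stop_at_token` is an illustrative helper, not Khoj's exact implementation:
  ```python
  def stop_at_token(stream, stop: str = "<s>"):
      """Yield response chunks until the stop marker appears, then end generation.

      A short tail is buffered so a marker split across chunk boundaries is caught.
      """
      pending = ""
      for chunk in stream:
          pending += chunk
          idx = pending.find(stop)
          if idx != -1:
              if idx:
                  yield pending[:idx]  # emit the text preceding the marker
              return  # hit the stop marker: end the response here
          # Hold back the last len(stop) - 1 chars in case the marker straddles chunks.
          safe = len(pending) - (len(stop) - 1)
          if safe > 0:
              yield pending[:safe]
              pending = pending[safe:]
      if pending:
          yield pending

  # Example: the fake "<s>[INST]" continuation is dropped from the output.
  chunks = iter(["Paris is the capital.", "<", "s>[INST] fake next turn"])
  assert "".join(stop_at_token(chunks)) == "Paris is the capital."
  ```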
## Minor
- Improve the default message for the web chat window when chat is not configured. Include a hint to use offline chat.
- Add null check in `perform_chat_checks` method
- Add offline chat director unit tests
## Performance Analysis (Time to First Token)
| Query | v0.10.0 | This branch |
|-------|---------|-------------|
| Query 1 | 52s | 28s |
| Query 2 | 33s | 42s |
| Query 3 | 67s | 38s |
## Additional Notes
- Llama 2 would previously sometimes start generating fake dialogue with its internal prompt patterns of `<s>[INST]` in responses, which made for a jarring experience. Stop generating the response when `<s>` is hit. Resolves #398
- Use the same `batch_size` in the extract questions actor as in the chat actor
- Log the final location where the chat model will be stored, instead of its temp filename while it is being downloaded
- Fix the download URL -- it was mapping to q3_K_M; fixed to use q4_K_S
- Use a proper Llama tokenizer for counting tokens when truncating prompts for Llama
- Add additional null checks
- Keep the system message when truncating. Previously the system message was dropped when the context size with chat history exceeded the max prompt size supported by the chat model. Now only previous chat messages are dropped, or the current message is truncated, while the system message is kept to provide guidance to the chat model; see the sketch at the end of this section
- Add screenshots of Khoj search and chat
- Put the quickstart on the landing page
- Put miscellaneous pages under a separate section
- Move credits to a separate page under miscellaneous
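
A minimal sketch of system-message-preserving truncation, assuming a `count_tokens` callable (e.g. the Llama tokenizer's token count) is supplied; all names here are illustrative, not Khoj's exact code:
```python
def truncate_chat_history(system_msg, history, current, max_tokens, count_tokens):
    """Build a prompt that always retains the system message.

    Oldest chat messages are dropped first; if even the current message alone
    overflows the remaining budget, it is truncated instead.
    """
    budget = max_tokens - count_tokens(system_msg)  # system message is reserved
    remaining = budget - count_tokens(current)
    kept = []
    # Walk history newest-to-oldest, keeping whatever still fits.
    for message in reversed(history):
        cost = count_tokens(message)
        if cost > remaining:
            break
        kept.insert(0, message)
        remaining -= cost
    if not kept and count_tokens(current) > budget:
        # Word-level truncation for illustration; real code would cut on tokens.
        words = current.split()
        while words and count_tokens(" ".join(words)) > budget:
            words.pop()
        current = " ".join(words)
    return [system_msg, *kept, current]

# Example with a whitespace "tokenizer": history is dropped, system message stays.
count = lambda text: len(text.split())
messages = truncate_chat_history(
    "You are Khoj.", ["old question", "old answer"], "new question", 6, count
)
assert messages[0] == "You are Khoj."
```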