- Allow conversing with user using GPT's contextually aware, generative capability
- Extract metadata, user intent from user's messages using GPT's general understanding
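A hedged sketch of such GPT-based intent extraction, using the OpenAI chat
API; the prompt, intent labels, and model name below are illustrative
assumptions, not the project's actual code:

    # Illustrative sketch: prompt, labels, and model are assumptions
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_intent(user_message: str) -> str:
        # Ask GPT to classify the message into one of a few assumed intents
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Classify the user's intent as one of: chat, search, remember."},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content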
- This may help reproduce test failures seen on GitHub locally
- Interfaces shouldn't break within the bounds of minor version updates of
  dependent packages, assuming they're following semantic versioning
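For illustration, a minimal sketch of semver-compatible pinning in a
setup.py; the package names and versions below are examples, not the
project's actual dependencies:

    from setuptools import setup

    setup(
        name="example-app",
        install_requires=[
            # "~= X.Y" permits minor/patch updates within major version X
            "sentence-transformers ~= 2.1",
            # explicit range form of the same idea
            "fastapi >= 0.68, < 1.0",
        ],
    )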
- Move search config fixture to conftest.py to be shared across tests (sketch below)
- Move image search type specific tests to test_image_search.py file
- Move, create asymmetric search type specific tests in new file
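A minimal sketch of such a shared fixture in conftest.py; the fixture and
config names are assumptions based on the bullets above:

    import pytest
    from dataclasses import dataclass
    from pathlib import Path

    @dataclass
    class SearchConfig:  # stand-in for the project's config class
        data_dir: Path

    @pytest.fixture(scope="session")
    def search_config(tmp_path_factory):
        # shared by test_text_search.py, test_image_search.py, ...
        return SearchConfig(data_dir=tmp_path_factory.mktemp("data"))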
Details
- Rename method query_* to query in search_types for standardization
- Wrapping Config code in classes simplified mocking test config
- Reduce args being passed to a function by passing them as a single
  argument wrapped in a class (see the sketch after this list)
- Minimize setup in main.py:__main__. Put most of it into functions.
  These functions can then be mocked in tests later if required
Setup Flow:
CLI_Args|Config_YAML -> (Text|Image)SearchConfig -> (Text|Image)SearchModel
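A minimal sketch of this flow with hypothetical names; the config class
bundles what were previously separate function args:

    from dataclasses import dataclass
    from pathlib import Path
    from typing import List

    @dataclass
    class TextSearchConfig:
        input_files: List[Path]
        embeddings_file: Path
        verbose: int = 0

    def initialize_model(config: TextSearchConfig):
        # Build the search model from its config; easy to mock in tests
        ...

    if __name__ == '__main__':
        config = TextSearchConfig(
            input_files=[Path("notes.org")],
            embeddings_file=Path("notes_embeddings.pt"))
        model = initialize_model(config)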
- Wrap Image, Music, Ledger search into the type of SearchModel they use
  Similar to what was done for the notes model by wrapping its config
  into an AsymmetricSearchModel.
- Use the uber wrapper class to expose all type specific search models
- Wrap asymmetric search model parameters into AsymmetricSearchModel class
- Create wrapper for all search type models. Put notes search model into it
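A sketch of the wrapper classes described above; the class and field names
are assumptions, not the project's actual identifiers:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class AsymmetricSearchModel:
        bi_encoder: object     # embeds queries and entries
        cross_encoder: object  # re-ranks top hits
        entries: List[dict]
        corpus_embeddings: object

    @dataclass
    class SearchModels:
        # one field per search type; populated on demand
        notes_search: Optional[AsymmetricSearchModel] = None
        image_search: Optional[object] = None
        music_search: Optional[AsymmetricSearchModel] = None
        ledger_search: Optional[AsymmetricSearchModel] = None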
- Test notes search end-to-end from client API layer to results.
  Use a model built on test data
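A hedged sketch of such an end-to-end test; the route, query params, and
app module are assumptions based on the bullets above:

    from fastapi.testclient import TestClient
    from main import app  # assumed application module

    client = TestClient(app)

    def test_notes_search():
        # the model is assumed pre-built on test data via a fixture
        response = client.get(
            "/search", params={"q": "how to install app", "t": "notes"})
        assert response.status_code == 200
        assert len(response.json()) > 0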
- Cleaner, more idiomatic usage of a global variable
- Simplifies mocking when testing the client in pytest, as the setting is
  wrapped in an object rather than a simple type, so it's passed around by reference
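A minimal sketch of the pattern, with assumed names; tests import the
shared instance and swap its fields, and callers see the change because
the object is passed by reference:

    class State:
        search_models = None
        search_config = None

    state = State()  # module-level instance shared across the app

    # In a pytest test:
    #   import app
    #   app.state.search_models = models_built_on_test_data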
- Use a SearchType to limit types that can be passed by user
- FastAPI automatically validates type passed in query param
- Available type options show up in Swagger UI, FastAPI docs
- Controller code looks neater as it avoids string comparisons for type
- Test invalid, valid search types via pytest
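A sketch of the described validation; the route and enum members are
assumptions based on the bullets above:

    from enum import Enum
    from fastapi import FastAPI

    app = FastAPI()

    class SearchType(str, Enum):
        notes = "notes"
        image = "image"
        music = "music"
        ledger = "ledger"

    @app.get("/search")
    def search(q: str, t: SearchType = SearchType.notes):
        # FastAPI rejects any t outside SearchType with a 422 and lists
        # the valid options in the generated Swagger UI docs
        if t == SearchType.notes:
            ...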
- Break the compute embeddings method into separate methods:
compute_image_embeddings and compute_metadata_embeddings
- If image_metadata_embeddings isn't defined, do not use it to enhance
  search results. Given image_metadata_embeddings wouldn't be defined
  if use_xmp_metadata is False, this avoids unnecessarily adding
  args to the query method
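A sketch of the split, with assumed names; extract_metadata is a
hypothetical helper for reading the XMP tags:

    def compute_image_embeddings(model, image_names):
        ...  # encode the raw images; see the batching sketch further below

    def extract_metadata(image_name):
        ...  # hypothetical: read subject/description XMP tags as text

    def compute_metadata_embeddings(model, image_names):
        return model.encode(
            [extract_metadata(f) for f in image_names],
            convert_to_tensor=True)

    def setup(model, image_names, use_xmp_metadata):
        image_embeddings = compute_image_embeddings(model, image_names)
        # skip metadata embeddings when XMP metadata is disabled, so
        # query() need not grow extra args for that case
        image_metadata_embeddings = (
            compute_metadata_embeddings(model, image_names)
            if use_xmp_metadata else None)
        return image_embeddings, image_metadata_embeddings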
- Issue:
  The process would get killed while encoding images
  because it consumed too much memory
- Fix:
  - Encode images in batches and append to image_embeddings (sketch below)
  - No need to use copy or deep_copy anymore with batch processing.
    It would earlier throw a "too many open files" error
Other Changes:
- Use tqdm to see progress even when using batch
- See progress bar of encoding independent of verbosity (for now)
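A sketch of the batching fix, assuming a sentence-transformers CLIP model;
encoding in fixed-size batches bounds memory use and lets each batch's
file handles be released before the next batch is opened:

    import torch
    from PIL import Image
    from tqdm import trange

    def compute_image_embeddings(model, image_names, batch_size=50):
        image_embeddings = []
        # trange shows encoding progress across batches
        for index in trange(0, len(image_names), batch_size):
            batch = [Image.open(f)
                     for f in image_names[index:index + batch_size]]
            image_embeddings += model.encode(batch, convert_to_tensor=True)
            for image in batch:
                image.close()  # avoids the "too many open files" error
        return torch.stack(image_embeddings)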
- Details
- The CLIP model can represent images, text in the same vector space
- Enhance CLIP's image understanding by augmenting the plain image
  with its text-based metadata.
  Specifically, with any subject or description XMP tags on the image
- Improve results by combining plain image similarity score with
metadata similarity scores for the highest ranked images
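A minimal sketch of the score combination; the weight and top_k re-ranking
scheme here are assumptions, not the project's exact values:

    from sentence_transformers import util

    def rank_images(query_embedding, image_embeddings,
                    metadata_embeddings, top_k=10):
        # rank all images by plain image similarity first
        hits = util.semantic_search(
            query_embedding, image_embeddings, top_k=top_k)[0]
        # boost the highest ranked images with their metadata similarity
        for hit in hits:
            metadata_score = util.cos_sim(
                query_embedding,
                metadata_embeddings[hit['corpus_id']]).item()
            hit['score'] += 0.3 * metadata_score  # illustrative weight
        return sorted(hits, key=lambda h: h['score'], reverse=True)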
- Minor Fixes
- Convert verbose from bool to integer in image_search.
  It's already passed as an integer from the main program entrypoint
- Process images with ".jpeg" extensions too
- Previously:
The text the model was trained on was being used to
re-create a semblance of the original org-mode entry.
- Now:
  - Store the raw entry as another key:value in each entry's JSON too.
    Only return the actual raw org entries in results,
    but create embeddings as before
- Also add link to entry in file:<filename>::<line_number> form
in property drawer of returned results
    This can be used to jump to the actual entry in its original file
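A hedged sketch of the resulting entry schema; the key and property names
are assumptions. The compiled text feeds the embeddings while the raw
entry, with its jump link in the property drawer, is what results return:

    entry = {
        # text the embeddings are computed on, as before
        "compiled": "Heading Body tag1 tag2",
        # verbatim org entry returned in search results
        "raw": ("* Heading\n"
                ":PROPERTIES:\n"
                ":LINE: file:notes.org::42\n"  # jump link to source entry
                ":END:\n"
                "Body of the entry\n"),
    }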