
Chroma Similarity Backend Integration #4520

Open
wants to merge 1 commit into base: develop

Conversation


@BigCoop BigCoop commented Jun 21, 2024

What changes are proposed in this pull request?

Builds out an integration that enables ChromaDB to be used as a backend for the similarity functionality.

How is this patch tested? If it is not, please explain why.

Working on that now: it passes unit tests on my end, but I can't get FiftyOne to import the module in order to run some tests from @brimoor.

Release Notes

Is this a user-facing change that should be mentioned in the release notes?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description
if this PR is part of a larger change.)
Chroma can now be used as a backend for the image similarity function.

What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other

Summary by CodeRabbit

  • New Features

    • Introduced support for ChromaDB similarity backend to enable similarity search based on embeddings.
  • Tests

    • Added test cases for image and patch similarity to ensure functionality and robustness.

Contributor

coderabbitai bot commented Jun 21, 2024

Walkthrough

The updates enhance FiftyOne with a new backend called ChromaDB for similarity search. The changes introduce classes and methods to manage configurations, compute similarities, and interact with similarity indexes. Additionally, testing functionalities are integrated to validate image and patch similarity processes.

Changes

| File Path | Change Summary |
| --- | --- |
| fiftyone/utils/chroma_fiftyone.py | Introduced classes and methods for ChromaDB interaction: ChromaSimilarityConfig, ChromaSimilarity, and ChromaSimilarityIndex. Enhanced similarity search within FiftyOne. |
| fiftyone/utils/tests_ch.py | Added test cases (test_image_similarity_backend, test_patch_similarity_backend) and dataset fixture. Enabled testing of ChromaDB similarity functionalities. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant FiftyOne
    participant ChromaSimilarityConfig
    participant ChromaSimilarity
    participant ChromaSimilarityIndex

    User->>FiftyOne: Initialize ChromaDB Similarity
    FiftyOne->>ChromaSimilarityConfig: Load configuration
    ChromaSimilarityConfig->>FiftyOne: Return config object
    FiftyOne->>ChromaSimilarity: Initialize with config
    FiftyOne->>ChromaSimilarityIndex: Create/Manage collection
    User->>FiftyOne: Perform similarity search
    FiftyOne->>ChromaSimilarityIndex: Query collection with embeddings
    ChromaSimilarityIndex->>FiftyOne: Return similarity results
    FiftyOne->>User: Display results

Poem

In fields of code where data roam,
A bunny builds a Chroma home.
With lines of logic, a search so bright,
Finding likeness in pixel light.
Backends align, precision prime,
Hippity-hop, it's Chroma time! 🐰✨

Warning

Review ran into problems

Problems (1)
  • Git: Failed to clone repository. Please contact CodeRabbit support.


BigCoop changed the title from "chroma similarity backend integration ~ having problems getting the t…" to "Chroma Similarity Backend Integration" on Jun 21, 2024
Author

BigCoop commented Jun 21, 2024

I don't know where to put the file so that it gets imported after I have set up the brain_config.json (attached). Any advice would be appreciated.
brain_config.json
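In case it helps: a custom similarity backend is typically registered in `~/.fiftyone/brain_config.json` under a `similarity_backends` key, and `config_cls` is resolved via a normal Python import. The sketch below assumes the module ships as `fiftyone/utils/chroma_fiftyone.py` (as in this PR); otherwise the file just needs to live anywhere on your `PYTHONPATH`, with the dotted path updated accordingly.

```json
{
    "similarity_backends": {
        "chroma": {
            "config_cls": "fiftyone.utils.chroma_fiftyone.ChromaSimilarityConfig"
        }
    }
}
```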

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

Commits

Files that changed from the base of the PR and between d095b22 and 9dd272b.

Files selected for processing (2)
  • fiftyone/utils/chroma_fiftyone.py (1 hunks)
  • fiftyone/utils/tests_ch.py (1 hunks)
Additional comments not posted (5)
fiftyone/utils/tests_ch.py (2)

1-4: Approved import statements.

The imports are well-organized and relevant to the test functionalities being implemented.


6-10: Verify the dataset deletion in the fixture.

The fixture dataset correctly sets up and tears down the dataset, but ensure that the deletion of the dataset does not affect other tests or data that should be retained.

#!/bin/bash
# Description: Verify that dataset deletion in the fixture does not affect other tests or necessary data.

# Test: Search for other usages of the dataset. Expect: No adverse effects on other tests.
rg --type py -F 'dataset.delete()'
fiftyone/utils/chroma_fiftyone.py (3)

1-14: Approved initial setup and imports.

The initial setup and imports are correctly structured for the functionality of the ChromaDB integration.


122-494: Optimize and secure ChromaSimilarityIndex interactions.

  1. The error handling in _initialize and other methods should be robust against various failure modes.
  2. Ensure that the handling of embeddings and sample IDs is secure and efficient, especially in the context of batch operations.

[Refactor suggestion]

- self._client=chromadb.HttpClient(host=self.config._url, ssl=False)
+ self._client=chromadb.HttpClient(host=self.config._url, ssl=True)  # Enable SSL for security
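Rather than hard-coding `ssl` either way, one option is to derive the connection settings from the configured URL. This is only a sketch: `client_kwargs` is a hypothetical helper (not part of this PR or of chromadb), and the fallback port of 8000 assumes a stock Chroma server.

```python
from urllib.parse import urlparse


def client_kwargs(url):
    """Derive HTTP client settings from a configured server URL.

    Hypothetical helper: returns a dict of host/port/ssl values, with SSL
    enabled whenever the URL uses the https scheme.
    """
    parsed = urlparse(url)
    return {
        "host": parsed.hostname,
        # Default to 443 for https, else Chroma's stock port (assumption)
        "port": parsed.port or (443 if parsed.scheme == "https" else 8000),
        "ssl": parsed.scheme == "https",
    }
```

The resulting dict could then be splatted into the client constructor, e.g. `chromadb.HttpClient(**client_kwargs(self.config.url))`.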

111-117: Ensure proper package requirements in ChromaSimilarity.

Verify that the chromadb package is correctly ensured for both requirements methods to prevent runtime errors.

#!/bin/bash
# Description: Verify the presence and correct version of the `chromadb` package.

# Test: Check package installation and version.
pip show chromadb

Comment on lines +12 to +59
def test_image_similarity_backend(dataset):
    backend = "chroma"
    prompt = "kites high in the air"
    brain_key = "clip_" + backend

    index = fob.compute_similarity(
        dataset,
        model="clip-vit-base32-torch",
        metric="euclidean",
        embeddings=False,
        backend=backend,
        brain_key=brain_key,
    )

    embeddings, sample_ids, _ = index.compute_embeddings(dataset)

    index.add_to_index(embeddings, sample_ids)
    assert index.total_index_size == 200
    assert index.index_size == 200
    assert index.missing_size is None

    sim_view = dataset.sort_by_similarity(prompt, k=10, brain_key=brain_key)
    assert len(sim_view) == 10

    del index
    dataset.clear_cache()

    assert dataset.get_brain_info(brain_key) is not None

    index = dataset.load_brain_results(brain_key)
    assert index.total_index_size == 200

    embeddings2, sample_ids2, _ = index.get_embeddings()
    assert embeddings2.shape == (200, 512)
    assert sample_ids2.shape == (200,)

    ids = sample_ids2[:100]
    embeddings2, sample_ids2, _ = index.get_embeddings(sample_ids=ids)
    assert embeddings2.shape == (100, 512)
    assert sample_ids2.shape == (100,)

    index.remove_from_index(sample_ids=ids)

    assert index.total_index_size == 100

    index.cleanup()
    dataset.delete_brain_run(brain_key)

Contributor


Fix logical issues in test_image_similarity_backend.

  1. The assert statements need to check against dynamic values rather than hard-coded ones to be more robust and adaptable to changes in dataset size or configuration.
  2. The deletion of the index and clearing of the cache should be verified to ensure they do not unintentionally affect other parts of the application.
- assert index.total_index_size == 200
- assert index.index_size == 200
+ # Suggested to replace with dynamic checks based on expected dataset configurations.

Committable suggestion was skipped due to low confidence.
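To make the checks dynamic, the expected sizes can be computed from the objects under test rather than hard-coded. A minimal sketch of the idea (`expected_remaining` is a hypothetical helper, not FiftyOne API):

```python
def expected_remaining(initial_size, removed_ids):
    """Index size expected after removing `removed_ids` entries.

    Hypothetical helper illustrating the dynamic-assertion suggestion:
    derive expected values from inputs instead of literals like 200 or 100.
    """
    return initial_size - len(removed_ids)

# In the test, this would replace the hard-coded literals, e.g.:
#   assert index.total_index_size == len(dataset)
#   assert index.total_index_size == expected_remaining(len(dataset), ids)
```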

Comment on lines +60 to +111
def test_patch_similarity_backend(dataset):
    backend = "chroma"
    view = dataset.to_patches("ground_truth")

    prompt = "cute puppies"
    brain_key = "gt_clip_" + backend

    index = fob.compute_similarity(
        dataset,
        patches_field="ground_truth",
        model="clip-vit-base32-torch",
        metric="euclidean",
        embeddings=False,
        backend=backend,
        brain_key=brain_key,
    )

    embeddings, sample_ids, label_ids = index.compute_embeddings(dataset)

    index.add_to_index(embeddings, sample_ids, label_ids=label_ids)
    assert index.total_index_size == 1232
    assert index.index_size == 1232
    assert index.missing_size is None

    sim_view = view.sort_by_similarity(prompt, k=10, brain_key=brain_key)
    assert len(sim_view) == 10

    del index
    dataset.clear_cache()

    assert dataset.get_brain_info(brain_key) is not None

    index = dataset.load_brain_results(brain_key)
    assert index.total_index_size == 1232

    embeddings2, sample_ids2, label_ids2 = index.get_embeddings()
    assert embeddings2.shape == (1232, 512)
    assert sample_ids2.shape == (1232,)
    assert label_ids2.shape == (1232,)

    ids = label_ids2[:100]
    embeddings2, sample_ids2, label_ids2 = index.get_embeddings(label_ids=ids)
    assert embeddings2.shape == (100, 512)
    assert sample_ids2.shape == (100,)
    assert label_ids2.shape == (100,)

    index.remove_from_index(label_ids=ids)

    assert index.total_index_size == 1132

    index.cleanup()
    dataset.delete_brain_run(brain_key)
Contributor


Review and optimize test_patch_similarity_backend.

  1. Similar to test_image_similarity_backend, replace hard-coded assert values with dynamic checks.
  2. Verify that the deletion and cache clearing operations are isolated and do not impact other functionalities.
- assert index.total_index_size == 1232
+ # Replace with dynamic values based on dataset configurations.

Committable suggestion was skipped due to low confidence.

Comment on lines +22 to +104
class ChromaSimilarityConfig(SimilarityConfig):
    """Configuration for the ChromaDB similarity backend.

    Args:
        embeddings_field (None): the sample field containing the embeddings,
            if one was provided
        model (None): the :class:`fiftyone.core.models.Model` or name of the
            zoo model that was used to compute embeddings, if known
        patches_field (None): the sample field defining the patches being
            analyzed, if any
        supports_prompts (None): whether this run supports prompt queries
        collection_name (None): the name of a ChromaDB collection to use or
            create. If none is provided, a new collection will be created
        metric (None): the embedding distance metric to use when creating a
            new index. Supported values are
            ``("cosine", "dotproduct", "euclidean")``
        url (None): a ChromaDB server URL to use
    """

    def __init__(
        self,
        embeddings_field=None,
        model=None,
        patches_field=None,
        supports_prompts=None,
        collection_name=None,
        metric=None,
        url=None,
        settings=None,
        **kwargs,
    ):
        if metric is not None and metric not in _SUPPORTED_METRICS:
            raise ValueError(
                "Unsupported metric '%s'. Supported values are %s"
                % (metric, tuple(_SUPPORTED_METRICS.keys()))
            )

        super().__init__(
            embeddings_field=embeddings_field,
            model=model,
            patches_field=patches_field,
            supports_prompts=supports_prompts,
            **kwargs,
        )

        self.collection_name = collection_name
        self.metric = metric

        # store privately so these aren't serialized
        self._url = url

    @property
    def method(self):
        return "chromadb"

    @property
    def url(self):
        return self._url

    @url.setter
    def url(self, value):
        self._url = value

    @property
    def max_k(self):
        return None

    @property
    def supports_least_similarity(self):
        return False

    @property
    def supported_aggregations(self):
        return ("mean",)

    def load_credentials(self, url=None):
        self._load_parameters(url=url)

class ChromaSimilarity(Similarity):
Contributor


Review and refine ChromaSimilarityConfig.

  1. Ensure that the error message for unsupported metrics is clear and actionable.
  2. Verify the URL handling to ensure it supports various ChromaDB configurations.
- raise ValueError("Unsupported metric '%s'. Supported values are %s" % (metric, tuple(_SUPPORTED_METRICS.keys())))
+ raise ValueError(f"Unsupported metric '{metric}'. Supported values are {tuple(_SUPPORTED_METRICS.keys())}")
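The suggested f-string check can be exercised in isolation. The mapping below is an assumed stand-in for the PR's `_SUPPORTED_METRICS` (its actual values are not shown in this diff; the keys come from the docstring, and the mapped Chroma space names are a guess):

```python
# Assumed stand-in for the PR's _SUPPORTED_METRICS mapping
_SUPPORTED_METRICS = {"cosine": "cosine", "dotproduct": "ip", "euclidean": "l2"}


def validate_metric(metric):
    """Raise a clear, actionable error for unsupported metrics."""
    if metric is not None and metric not in _SUPPORTED_METRICS:
        raise ValueError(
            f"Unsupported metric '{metric}'. "
            f"Supported values are {tuple(_SUPPORTED_METRICS.keys())}"
        )
    return metric
```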
BigCoop mentioned this pull request Jun 21, 2024