Image Feature Extraction

Tags: Transformers · JAX · Safetensors · MLX · PyTorch · aimv2_vision_model · vision · custom_code
Instructions for using apple/aimv2-large-patch14-native with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use apple/aimv2-large-patch14-native with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-feature-extraction", model="apple/aimv2-large-patch14-native", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("apple/aimv2-large-patch14-native", trust_remote_code=True)
model = AutoModel.from_pretrained("apple/aimv2-large-patch14-native", trust_remote_code=True)
```

- MLX
How to use apple/aimv2-large-patch14-native with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir aimv2-large-patch14-native apple/aimv2-large-patch14-native
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
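As the model name suggests, aimv2-large-patch14-native embeds images as 14×14 pixel patches while keeping the input at its native resolution (the image processor config sets `do_resize` to `false`). A small sketch of the resulting token-count arithmetic; the helper name is illustrative, and the exact patch embedding is defined by the model's remote code:

```python
def num_patches(height: int, width: int, patch_size: int = 14) -> int:
    """Count the non-overlapping patch_size x patch_size patches that
    fit in an image processed at its native resolution."""
    return (height // patch_size) * (width // patch_size)

# A 224x224 input yields a 16x16 grid of patches.
print(num_patches(224, 224))  # 256
```

Larger inputs therefore produce proportionally longer patch sequences, unlike fixed-resolution vision models that resize every image to the same size first.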
Image processor configuration:

```json
{
  "do_center_crop": false,
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": false,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "CLIPImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "shortest_edge": 224
  }
}
```
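With `do_resize` and `do_center_crop` both false, the config effectively applies only two steps: rescale uint8 pixels to [0, 1] (`rescale_factor` = 1/255) and normalize per channel with the CLIP mean and std. A minimal NumPy sketch of that preprocessing, using the values from the config above:

```python
import numpy as np

# Values taken from the image processor config.
RESCALE_FACTOR = 1 / 255  # 0.00392156862745098
IMAGE_MEAN = np.array([0.48145466, 0.4578275, 0.40821073])
IMAGE_STD = np.array([0.26862954, 0.26130258, 0.27577711])

def preprocess(pixels: np.ndarray) -> np.ndarray:
    """Rescale uint8 RGB pixels to [0, 1], then normalize per channel.

    Mirrors what CLIPImageProcessor does with this config; resizing and
    center-cropping are disabled, so the native resolution is preserved.
    """
    x = pixels.astype(np.float32) * RESCALE_FACTOR
    return (x - IMAGE_MEAN) / IMAGE_STD

img = np.full((224, 224, 3), 128, dtype=np.uint8)  # dummy mid-gray image
out = preprocess(img)
print(out.shape)  # (224, 224, 3)
```

The actual processor additionally converts inputs to RGB (`do_convert_rgb`) and returns channel-first tensors, but the numeric transform is the one shown.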