Remove efficientnet-pytorch dependency #1018

Closed
wants to merge 1 commit into from
50 changes: 21 additions & 29 deletions README.md
@@ -1,18 +1,18 @@
<div align="center">
![logo](https://i.ibb.co/dc1XdhT/Segmentation-Models-V2-Side-1-1.png)
**Python library with Neural Networks for Image
Segmentation based on [PyTorch](https://pytorch.org/).**

[![Generic badge](https://img.shields.io/badge/License-MIT-<COLOR>.svg?style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE)
[![GitHub Workflow Status (branch)](https://img.shields.io/github/actions/workflow/status/qubvel/segmentation_models.pytorch/tests.yml?branch=main&style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/actions/workflows/tests.yml)
[![Read the Docs](https://img.shields.io/readthedocs/smp?style=for-the-badge&logo=readthedocs&logoColor=white)](https://smp.readthedocs.io/en/latest/)

![logo](https://i.ibb.co/dc1XdhT/Segmentation-Models-V2-Side-1-1.png)
**Python library with Neural Networks for Image
Segmentation based on [PyTorch](https://pytorch.org/).**

[![Generic badge](https://img.shields.io/badge/License-MIT-<COLOR>.svg?style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE)
[![GitHub Workflow Status (branch)](https://img.shields.io/github/actions/workflow/status/qubvel/segmentation_models.pytorch/tests.yml?branch=main&style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/actions/workflows/tests.yml)
[![Read the Docs](https://img.shields.io/readthedocs/smp?style=for-the-badge&logo=readthedocs&logoColor=white)](https://smp.readthedocs.io/en/latest/)
<br>
[![PyPI](https://img.shields.io/pypi/v/segmentation-models-pytorch?color=blue&style=for-the-badge&logo=pypi&logoColor=white)](https://pypi.org/project/segmentation-models-pytorch/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/segmentation-models-pytorch?style=for-the-badge&color=blue)](https://pepy.tech/project/segmentation-models-pytorch)
[![PyPI](https://img.shields.io/pypi/v/segmentation-models-pytorch?color=blue&style=for-the-badge&logo=pypi&logoColor=white)](https://pypi.org/project/segmentation-models-pytorch/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/segmentation-models-pytorch?style=for-the-badge&color=blue)](https://pepy.tech/project/segmentation-models-pytorch)
<br>
[![PyTorch - Version](https://img.shields.io/badge/PYTORCH-1.4+-red?style=for-the-badge&logo=pytorch)](https://pepy.tech/project/segmentation-models-pytorch)
[![Python - Version](https://img.shields.io/badge/PYTHON-3.9+-red?style=for-the-badge&logo=python&logoColor=white)](https://pepy.tech/project/segmentation-models-pytorch)
[![PyTorch - Version](https://img.shields.io/badge/PYTORCH-1.4+-red?style=for-the-badge&logo=pytorch)](https://pepy.tech/project/segmentation-models-pytorch)
[![Python - Version](https://img.shields.io/badge/PYTHON-3.9+-red?style=for-the-badge&logo=python&logoColor=white)](https://pepy.tech/project/segmentation-models-pytorch)

</div>

@@ -23,7 +23,7 @@ The main features of this library are:
- 124 available encoders (and 500+ encoders from [timm](https://github.com/rwightman/pytorch-image-models))
- All encoders have pre-trained weights for faster and better convergence
- Popular metrics and losses for training routines

### [📚 Project Documentation 📚](http://smp.readthedocs.io/)

Visit [Read The Docs Project Page](https://smp.readthedocs.io/) or read the following README to know more about Segmentation Models Pytorch (SMP for short) library
@@ -55,7 +55,7 @@ The segmentation model is just a PyTorch `torch.nn.Module`, which can be created
import segmentation_models_pytorch as smp

model = smp.Unet(
encoder_name="resnet34", # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
encoder_name="resnet34", # choose encoder, e.g. mobilenet_v2 or timm-efficientnet-b7
encoder_weights="imagenet", # use `imagenet` pre-trained weights for encoder initialization
in_channels=1, # model input channels (1 for gray-scale images, 3 for RGB, etc.)
classes=3, # model output channels (number of classes in your dataset)
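
With the `efficientnet-b*` family removed by this PR, existing user code only needs to swap the encoder name for its timm-backed counterpart. A minimal migration sketch (the shapes and argument values below are illustrative, not taken from the diff):

```python
import torch
import segmentation_models_pytorch as smp

# Previously: encoder_name="efficientnet-b0" (family removed by this PR).
# Now: use the timm-backed equivalent, which keeps imagenet weights available.
model = smp.Unet(
    encoder_name="timm-efficientnet-b0",
    encoder_weights="imagenet",
    in_channels=3,
    classes=2,
)

with torch.no_grad():
    mask = model(torch.rand(1, 3, 256, 256))  # -> shape [1, 2, 256, 256]
```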
@@ -277,14 +277,6 @@ The following is a list of supported encoders in the SMP. Select the appropriate

|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|efficientnet-b0 |imagenet |4M |
|efficientnet-b1 |imagenet |6M |
|efficientnet-b2 |imagenet |7M |
|efficientnet-b3 |imagenet |10M |
|efficientnet-b4 |imagenet |17M |
|efficientnet-b5 |imagenet |28M |
|efficientnet-b6 |imagenet |40M |
|efficientnet-b7 |imagenet |63M |
|timm-efficientnet-b0 |imagenet / advprop / noisy-student|4M |
|timm-efficientnet-b1 |imagenet / advprop / noisy-student|6M |
|timm-efficientnet-b2 |imagenet / advprop / noisy-student|7M |
@@ -361,7 +353,7 @@ The following is a list of supported encoders in the SMP. Select the appropriate

Backbone from SegFormer pretrained on Imagenet! Can be used with other decoders from package, you can combine Mix Vision Transformer with Unet, FPN and others!

Limitations:
Limitations:

- encoder is **not** supported by Linknet, Unet++
- encoder is supported by FPN only for encoder **depth = 5**
@@ -423,18 +415,18 @@ Total number of supported encoders: 549
##### Input channels
Input channels parameter allows you to create models, which process tensors with arbitrary number of channels.
If you use pretrained weights from imagenet - weights of first convolution will be reused. For
1-channel case it would be a sum of weights of first convolution layer, otherwise channels would be
1-channel case it would be a sum of weights of first convolution layer, otherwise channels would be
populated with weights like `new_weight[:, i] = pretrained_weight[:, i % 3]` and than scaled with `new_weight * 3 / new_in_channels`.
```python
model = smp.FPN('resnet34', in_channels=1)
mask = model(torch.ones([1, 1, 64, 64]))
```
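
The channel-remapping rule described above can be illustrated with a small stand-alone sketch in plain PyTorch (this mimics the documented behaviour; it is not the library's internal code):

```python
import torch

pretrained_weight = torch.randn(64, 3, 7, 7)  # first conv of an imagenet encoder (RGB)
new_in_channels = 5

# Repeat the RGB filters cyclically over the new channels, then rescale so the
# expected magnitude of the convolution output stays roughly unchanged.
new_weight = torch.empty(64, new_in_channels, 7, 7)
for i in range(new_in_channels):
    new_weight[:, i] = pretrained_weight[:, i % 3]
new_weight = new_weight * 3 / new_in_channels

# 1-channel special case: sum the RGB filters into a single input channel.
one_channel_weight = pretrained_weight.sum(dim=1, keepdim=True)  # [64, 1, 7, 7]
```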

##### Auxiliary classification output
All models support `aux_params` parameters, which is default set to `None`.
##### Auxiliary classification output
All models support `aux_params` parameters, which is default set to `None`.
If `aux_params = None` then classification auxiliary output is not created, else
model produce not only `mask`, but also `label` output with shape `NC`.
Classification head consists of GlobalPooling->Dropout(optional)->Linear->Activation(optional) layers, which can be
Classification head consists of GlobalPooling->Dropout(optional)->Linear->Activation(optional) layers, which can be
configured by `aux_params` as follows:
```python
aux_params=dict(
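    # NOTE: the diff is truncated at this point; the entries below are an
    # illustrative, hedged completion rather than the quoted README text.
    # `aux_params` typically configures the classification head like this:
    pooling="avg",         # global pooling: "avg" or "max"
    dropout=0.5,           # optional dropout before the final linear layer
    activation="sigmoid",  # optional activation on the label output
    classes=4,             # number of classification outputs
)
model = smp.Unet("resnet34", aux_params=aux_params)
mask, label = model(torch.ones([1, 3, 64, 64]))  # label has shape [1, 4]
```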
@@ -472,7 +464,7 @@ $ pip install git+https://github.com/qubvel/segmentation_models.pytorch

### 🤝 Contributing

#### Install SMP
#### Install SMP

```bash
make install_dev # create .venv, install SMP in dev mode
@@ -484,7 +476,7 @@ make install_dev # create .venv, install SMP in dev mode
make fixup # Ruff for formatting and lint checks
```

#### Update table with encoders
#### Update table with encoders

```bash
make table # generate a table with encoders and print to stdout
1 change: 0 additions & 1 deletion docs/conf.py
@@ -102,7 +102,6 @@ def get_version():
"PIL",
"pretrainedmodels",
"torchvision",
"efficientnet-pytorch",
"segmentation_models_pytorch.encoders",
"segmentation_models_pytorch.utils",
# 'segmentation_models_pytorch.base',
16 changes: 0 additions & 16 deletions docs/encoders.rst
@@ -215,22 +215,6 @@ EfficientNet
+------------------------+--------------------------------------+-------------+
| Encoder | Weights | Params, M |
+========================+======================================+=============+
| efficientnet-b0 | imagenet | 4M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b1 | imagenet | 6M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b2 | imagenet | 7M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b3 | imagenet | 10M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b4 | imagenet | 17M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b5 | imagenet | 28M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b6 | imagenet | 40M |
+------------------------+--------------------------------------+-------------+
| efficientnet-b7 | imagenet | 63M |
+------------------------+--------------------------------------+-------------+
| timm-efficientnet-b0 | imagenet / advprop / noisy-student | 4M |
+------------------------+--------------------------------------+-------------+
| timm-efficientnet-b1 | imagenet / advprop / noisy-student | 6M |
4 changes: 2 additions & 2 deletions docs/quickstart.rst
@@ -6,11 +6,11 @@
Segmentation model is just a PyTorch nn.Module, which can be created as easy as:

.. code-block:: python

import segmentation_models_pytorch as smp

model = smp.Unet(
encoder_name="resnet34", # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
encoder_name="resnet34", # choose encoder, e.g. mobilenet_v2 or timm-efficientnet-b7
encoder_weights="imagenet", # use `imagenet` pre-trained weights for encoder initialization
in_channels=1, # model input channels (1 for gray-scale images, 3 for RGB, etc.)
classes=3, # model output channels (number of classes in your dataset)
1 change: 0 additions & 1 deletion pyproject.toml
@@ -17,7 +17,6 @@ classifiers = [
'Programming Language :: Python :: Implementation :: PyPy',
]
dependencies = [
'efficientnet-pytorch>=0.6.1',
'huggingface-hub>=0.24',
'numpy>=1.19.3',
'pillow>=8',
1 change: 0 additions & 1 deletion requirements/minimum.old
@@ -1,4 +1,3 @@
efficientnet-pytorch==0.6.1
huggingface-hub==0.24.0
numpy==1.19.3
pillow==8.0.0
1 change: 0 additions & 1 deletion requirements/required.txt
@@ -1,4 +1,3 @@
efficientnet-pytorch==0.7.1
huggingface_hub==0.27.0
numpy==2.2.1
pillow==11.0.0
2 changes: 0 additions & 2 deletions segmentation_models_pytorch/encoders/__init__.py
@@ -9,7 +9,6 @@
from .densenet import densenet_encoders
from .inceptionresnetv2 import inceptionresnetv2_encoders
from .inceptionv4 import inceptionv4_encoders
from .efficientnet import efficient_net_encoders
from .mobilenet import mobilenet_encoders
from .xception import xception_encoders
from .timm_efficientnet import timm_efficientnet_encoders
@@ -34,7 +33,6 @@
encoders.update(densenet_encoders)
encoders.update(inceptionresnetv2_encoders)
encoders.update(inceptionv4_encoders)
encoders.update(efficient_net_encoders)
encoders.update(mobilenet_encoders)
encoders.update(xception_encoders)
encoders.update(timm_efficientnet_encoders)
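
After this change the `efficientnet-*` keys disappear from the encoder registry while the `timm-efficientnet-*` ones remain. A quick sanity check from user code might look like this (assuming `get_encoder_names()`, the helper that lists the registry keys, as the public entry point):

```python
import segmentation_models_pytorch as smp

names = smp.encoders.get_encoder_names()
assert "efficientnet-b0" not in names       # removed by this PR
assert "timm-efficientnet-b0" in names      # timm-backed replacement still registered
```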
177 changes: 0 additions & 177 deletions segmentation_models_pytorch/encoders/efficientnet.py

This file was deleted.

17 changes: 0 additions & 17 deletions tests/encoders/test_smp_encoders.py
@@ -22,20 +22,3 @@ class TestMixTransformerEncoder(base.BaseEncoderTester):
if not RUN_ALL_ENCODERS
else ["mit_b0", "mit_b1", "mit_b2", "mit_b3", "mit_b4", "mit_b5"]
)


class TestEfficientNetEncoder(base.BaseEncoderTester):
encoder_names = (
["efficientnet-b0"]
if not RUN_ALL_ENCODERS
else [
"efficientnet-b0",
"efficientnet-b1",
"efficientnet-b2",
"efficientnet-b3",
"efficientnet-b4",
"efficientnet-b5",
"efficientnet-b6",
# "efficientnet-b7", # extra large model
]
)
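
With the dedicated test class gone, a quick manual smoke test of the replacement encoder could look like the sketch below (illustrative only, not part of the test suite):

```python
import torch
import segmentation_models_pytorch as smp

encoder = smp.encoders.get_encoder(
    "timm-efficientnet-b0", in_channels=3, depth=5, weights=None
)
features = encoder(torch.rand(1, 3, 224, 224))
print([f.shape for f in features])  # one feature map per stage, the deepest at stride 32
```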