Merge pull request #7 from Navodplayer1/dev
#3 added faster-whisper
NavodPeiris committed Jan 22, 2024
2 parents a074e2d + de57772 commit d363894
Showing 19 changed files with 497 additions and 300 deletions.
75 changes: 57 additions & 18 deletions README.md
@@ -15,7 +15,31 @@
</p>


installation:
### Requirements

* Python 3.8 or greater

### GPU execution

GPU execution needs CUDA 11.

GPU execution requires the following NVIDIA libraries to be installed:

* [cuBLAS for CUDA 11](https://developer.nvidia.com/cublas)
* [cuDNN 8 for CUDA 11](https://developer.nvidia.com/cudnn)

There are multiple ways to install these libraries. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below.

### Google Colab:

On Google Colab, run this to install the CUDA dependencies:
```
!apt install libcublas11
```

You can see this example [notebook]()

### installation:
```
pip install speechlib
```
@@ -30,18 +54,20 @@ This library contains following audio preprocessing functions:

3. re-encode the wav file to have 16-bit PCM encoding
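speechlib's PreProcessor handles these steps for you, but as an illustration of what stereo-to-mono conversion and 16-bit PCM re-encoding actually involve, here is a dependency-free sketch using Python's built-in `wave` module (the function name is ours, not part of speechlib):

```python
import struct
import wave

def stereo_to_mono_16bit(in_path: str, out_path: str) -> None:
    """Average the two channels of a 16-bit stereo WAV into a mono 16-bit WAV."""
    with wave.open(in_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        n = src.getnframes()
        raw = src.readframes(n)
    # interleaved samples: L0, R0, L1, R1, ...
    samples = struct.unpack("<%dh" % (n * 2), raw)
    mono = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]
    with wave.open(out_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)  # 2 bytes per sample = 16-bit PCM
        dst.setframerate(rate)
        dst.writeframes(struct.pack("<%dh" % len(mono), *mono))
```

In practice the library's own helpers (backed by pydub/ffmpeg) also cover formats such as mp3, which the stdlib `wave` module cannot read.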

Transcriptor method takes 5 arguments.
Transcriptor method takes 6 arguments.

1. file to transcribe

2. log_folder to store transcription

3. language used for transcribing
3. language used for transcribing (language code is used)

4. model size ("tiny", "medium", or "large")
4. model size ("tiny", "small", "medium", "large", "large-v1", "large-v2", "large-v3")

5. voices_folder (contains speaker voice samples for speaker recognition)

6. quantization: this determines whether to use int8 quantization. Quantization may speed up the process but lower the accuracy.

voices_folder should contain subfolders named with speaker names. Each subfolder belongs to a speaker and can contain many voice samples. These samples are used for speaker recognition to identify the speaker.

If voices_folder is not provided, speaker tags will be arbitrary.
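Under the hood speechlib uses faster-whisper, whose `WhisperModel` takes a `compute_type` argument; the quantization flag plausibly maps to `int8` versus a float compute type. A hedged sketch of that mapping (the helper `pick_compute_type` is ours, not part of speechlib's API):

```python
def pick_compute_type(quantization: bool, device: str) -> str:
    """Map a quantization flag onto a faster-whisper compute type."""
    if quantization:
        return "int8"  # smaller weights, usually faster, slightly less accurate
    return "float16" if device == "cuda" else "float32"

# With faster-whisper installed, a model could then be loaded like:
#   from faster_whisper import WhisperModel
#   model = WhisperModel("medium", device="cuda",
#                        compute_type=pick_compute_type(True, "cuda"))
```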
@@ -55,13 +81,14 @@ transcript will also indicate the timeframe in seconds where each speaker speaks
```
from speechlib import Transcriptor
file = "obama1.wav"
file = "obama_zach.wav"
voices_folder = "voices"
language = "english"
language = "en"
log_folder = "logs"
modelSize = "medium"
quantization = False # setting this 'True' may speed up the process but lower the accuracy
transcriptor = Transcriptor(file, log_folder, language, modelSize, voices_folder)
transcriptor = Transcriptor(file, log_folder, language, modelSize, voices_folder, quantization)
res = transcriptor.transcribe()
@@ -70,20 +97,35 @@ res --> [["start", "end", "text", "speaker"], ["start", "end", "text", "speaker"

start: starting time of speech in seconds
end: ending time of speech in seconds
text: transcribed text for speech during start and end
text: transcribed text for speech during start and end
speaker: speaker of the text
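Since `res` is a plain list of lists, downstream code can format the transcript however it likes. A minimal sketch (the `res` values here are invented for illustration, not real output):

```python
# hypothetical output of transcriptor.transcribe()
res = [
    ["0.0", "4.2", "The economy is recovering.", "Obama"],
    ["4.2", "6.0", "Great to hear.", "Zach"],
]

# each entry unpacks as start, end, text, speaker
lines = [f"{speaker} ({start}s - {end}s): {text}" for start, end, text, speaker in res]
for line in lines:
    print(line)
```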

voices_folder structure:
```
voices_folder
|---> person1
| |---> sample1.wav
| |---> sample2.wav
| ...
|
|---> person2
| |---> sample1.wav
| |---> sample2.wav
| ...
|--> ...
```
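Speaker names presumably come straight from these subfolder names. A small sketch of how they could be enumerated (this helper is ours, not part of speechlib):

```python
from pathlib import Path

def list_speakers(voices_folder: str) -> list:
    """Return speaker names, i.e. the subfolder names of voices_folder."""
    return sorted(p.name for p in Path(voices_folder).iterdir() if p.is_dir())
```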

![voices_folder structure](voices_folder_structure1.png)

Generated transcript:
supported language codes:

![Transcript](transcript.png)
```
"af", "am", "ar", "as", "az", "ba", "be", "bg", "bn", "bo", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fo", "fr", "gl", "gu", "ha", "haw", "he", "hi", "hr", "ht", "hu", "hy", "id", "is","it", "ja", "jw", "ka", "kk", "km", "kn", "ko", "la", "lb", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn","mr", "ms", "mt", "my", "ne", "nl", "nn", "no", "oc", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk","sl", "sn", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "uk", "ur", "uz","vi", "yi", "yo", "zh", "yue"
```

supported languages:
supported language names:

['english', 'chinese', 'german', 'spanish', 'russian', 'korean', 'french', 'japanese', 'portuguese', 'turkish', 'polish', 'catalan', 'dutch', 'arabic', 'swedish', 'italian', 'indonesian', 'hindi', 'finnish', 'vietnamese', 'hebrew', 'ukrainian', 'greek', 'malay', 'czech', 'romanian', 'danish', 'hungarian', 'tamil', 'norwegian', 'thai', 'urdu', 'croatian', 'bulgarian', 'lithuanian', 'latin', 'maori', 'malayalam', 'welsh', 'slovak', 'telugu', 'persian', 'latvian', 'bengali', 'serbian', 'azerbaijani', 'slovenian', 'kannada', 'estonian', 'macedonian', 'breton', 'basque', 'icelandic', 'armenian', 'nepali', 'mongolian', 'bosnian', 'kazakh', 'albanian', 'swahili', 'galician', 'marathi', 'punjabi', 'sinhala', 'khmer', 'shona', 'yoruba', 'somali', 'afrikaans', 'occitan', 'georgian', 'belarusian', 'tajik', 'sindhi', 'gujarati', 'amharic', 'yiddish', 'lao', 'uzbek', 'faroese', 'haitian creole', 'pashto', 'turkmen', 'nynorsk', 'maltese', 'sanskrit', 'luxembourgish', 'myanmar', 'tibetan', 'tagalog', 'malagasy', 'assamese', 'tatar', 'hawaiian', 'lingala', 'hausa', 'bashkir', 'javanese', 'sundanese', 'burmese', 'valencian', 'flemish', 'haitian', 'letzeburgesch', 'pushto', 'panjabi', 'moldavian', 'moldovan', 'sinhalese', 'castilian']
```
"Afrikaans", "Amharic", "Arabic", "Assamese", "Azerbaijani", "Bashkir", "Belarusian", "Bulgarian", "Bengali","Tibetan", "Breton", "Bosnian", "Catalan", "Czech", "Welsh", "Danish", "German", "Greek", "English", "Spanish","Estonian", "Basque", "Persian", "Finnish", "Faroese", "French", "Galician", "Gujarati", "Hausa", "Hawaiian","Hebrew", "Hindi", "Croatian", "Haitian", "Hungarian", "Armenian", "Indonesian", "Icelandic", "Italian", "Japanese","Javanese", "Georgian", "Kazakh", "Khmer", "Kannada", "Korean", "Latin", "Luxembourgish", "Lingala", "Lao","Lithuanian", "Latvian", "Malagasy", "Maori", "Macedonian", "Malayalam", "Mongolian", "Marathi", "Malay", "Maltese","Burmese", "Nepali", "Dutch", "Norwegian Nynorsk", "Norwegian", "Occitan", "Punjabi", "Polish", "Pashto","Portuguese", "Romanian", "Russian", "Sanskrit", "Sindhi", "Sinhalese", "Slovak", "Slovenian", "Shona", "Somali","Albanian", "Serbian", "Sundanese", "Swedish", "Swahili", "Tamil", "Telugu", "Tajik", "Thai", "Turkmen", "Tagalog","Turkish", "Tatar", "Ukrainian", "Urdu", "Uzbek", "Vietnamese", "Yiddish", "Yoruba", "Chinese", "Cantonese",
```

### Audio preprocessing example:

@@ -93,9 +135,7 @@ from speechlib import PreProcessor
file = "obama1.mp3"
# convert mp3 to wav
PreProcessor.mp3_to_wav(file)
wav_file = "obama1.wav"
wav_file = PreProcessor.convert_to_wav(file)
# convert wav file from stereo to mono
PreProcessor.convert_to_mono(wav_file)
@@ -108,5 +148,4 @@ This library uses following huggingface models:

#### https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb
#### https://huggingface.co/Ransaka/whisper-tiny-sinhala-20k-8k-steps-v2
#### https://huggingface.co/openai/whisper-medium
#### https://huggingface.co/pyannote/speaker-diarization
4 changes: 1 addition & 3 deletions examples/preprocess.py
@@ -3,9 +3,7 @@
file = "obama1.mp3"

# convert mp3 to wav
PreProcessor.mp3_to_wav(file)

wav_file = "obama1.wav"
wav_file = PreProcessor.convert_to_wav(file)

# convert wav file from stereo to mono
PreProcessor.convert_to_mono(wav_file)
7 changes: 3 additions & 4 deletions examples/transcribe.py
@@ -2,12 +2,11 @@

file = "obama_zach.wav"
voices_folder = "voices"
language = "english"
language = "en"
log_folder = "logs"
modelSize = "medium"
quantization = False # setting this 'True' may speed up the process but lower the accuracy

transcriptor = Transcriptor(file, log_folder, language, modelSize, voices_folder)
transcriptor = Transcriptor(file, log_folder, language, modelSize, voices_folder, quantization)

res = transcriptor.transcribe()

print("res", res)
62 changes: 47 additions & 15 deletions library.md
@@ -1,8 +1,32 @@
installation:
### Requirements

* Python 3.8 or greater

### GPU execution

GPU execution needs CUDA 11.

GPU execution requires the following NVIDIA libraries to be installed:

* [cuBLAS for CUDA 11](https://developer.nvidia.com/cublas)
* [cuDNN 8 for CUDA 11](https://developer.nvidia.com/cudnn)

There are multiple ways to install these libraries. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below.

### Google Colab:

On Google Colab, run this to install the CUDA dependencies:
```
!apt install libcublas11
```

You can see this example [notebook]()

### installation:
```
pip install speechlib
```

This library does speaker diarization, speaker recognition, and transcription on a single wav file to provide a transcript with actual speaker names. This library will also return an array containing result information. ⚙

This library contains following audio preprocessing functions:
@@ -13,18 +37,20 @@ This library contains following audio preprocessing functions:

3. re-encode the wav file to have 16-bit PCM encoding

Transcriptor method takes 5 arguments.
Transcriptor method takes 6 arguments.

1. file to transcribe

2. log_folder to store transcription

3. language used for transcribing
3. language used for transcribing (language code is used)

4. model size ("tiny", "medium", or "large")
4. model size ("tiny", "small", "medium", "large", "large-v1", "large-v2", "large-v3")

5. voices_folder (contains speaker voice samples for speaker recognition)

6. quantization: this determines whether to use int8 quantization. Quantization may speed up the process but lower the accuracy.

voices_folder should contain subfolders named with speaker names. Each subfolder belongs to a speaker and can contain many voice samples. These samples are used for speaker recognition to identify the speaker.

If voices_folder is not provided, speaker tags will be arbitrary.
@@ -38,13 +64,14 @@ transcript will also indicate the timeframe in seconds where each speaker speaks
```
from speechlib import Transcriptor
file = "obama1.wav"
file = "obama_zach.wav"
voices_folder = "voices"
language = "english"
language = "en"
log_folder = "logs"
modelSize = "medium"
quantization = False # setting this 'True' may speed up the process but lower the accuracy
transcriptor = Transcriptor(file, log_folder, language, modelSize, voices_folder)
transcriptor = Transcriptor(file, log_folder, language, modelSize, voices_folder, quantization)
res = transcriptor.transcribe()
@@ -53,7 +80,7 @@ res --> [["start", "end", "text", "speaker"], ["start", "end", "text", "speaker"

start: starting time of speech in seconds
end: ending time of speech in seconds
text: transcribed text for speech during start and end
text: transcribed text for speech during start and end
speaker: speaker of the text

voices_folder structure:
@@ -71,9 +98,17 @@ voices_folder
|--> ...
```

supported languages:
supported language codes:

```
"af", "am", "ar", "as", "az", "ba", "be", "bg", "bn", "bo", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fo", "fr", "gl", "gu", "ha", "haw", "he", "hi", "hr", "ht", "hu", "hy", "id", "is","it", "ja", "jw", "ka", "kk", "km", "kn", "ko", "la", "lb", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn","mr", "ms", "mt", "my", "ne", "nl", "nn", "no", "oc", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk","sl", "sn", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "uk", "ur", "uz","vi", "yi", "yo", "zh", "yue"
```

supported language names:

['english', 'chinese', 'german', 'spanish', 'russian', 'korean', 'french', 'japanese', 'portuguese', 'turkish', 'polish', 'catalan', 'dutch', 'arabic', 'swedish', 'italian', 'indonesian', 'hindi', 'finnish', 'vietnamese', 'hebrew', 'ukrainian', 'greek', 'malay', 'czech', 'romanian', 'danish', 'hungarian', 'tamil', 'norwegian', 'thai', 'urdu', 'croatian', 'bulgarian', 'lithuanian', 'latin', 'maori', 'malayalam', 'welsh', 'slovak', 'telugu', 'persian', 'latvian', 'bengali', 'serbian', 'azerbaijani', 'slovenian', 'kannada', 'estonian', 'macedonian', 'breton', 'basque', 'icelandic', 'armenian', 'nepali', 'mongolian', 'bosnian', 'kazakh', 'albanian', 'swahili', 'galician', 'marathi', 'punjabi', 'sinhala', 'khmer', 'shona', 'yoruba', 'somali', 'afrikaans', 'occitan', 'georgian', 'belarusian', 'tajik', 'sindhi', 'gujarati', 'amharic', 'yiddish', 'lao', 'uzbek', 'faroese', 'haitian creole', 'pashto', 'turkmen', 'nynorsk', 'maltese', 'sanskrit', 'luxembourgish', 'myanmar', 'tibetan', 'tagalog', 'malagasy', 'assamese', 'tatar', 'hawaiian', 'lingala', 'hausa', 'bashkir', 'javanese', 'sundanese', 'burmese', 'valencian', 'flemish', 'haitian', 'letzeburgesch', 'pushto', 'panjabi', 'moldavian', 'moldovan', 'sinhalese', 'castilian']
```
"Afrikaans", "Amharic", "Arabic", "Assamese", "Azerbaijani", "Bashkir", "Belarusian", "Bulgarian", "Bengali","Tibetan", "Breton", "Bosnian", "Catalan", "Czech", "Welsh", "Danish", "German", "Greek", "English", "Spanish","Estonian", "Basque", "Persian", "Finnish", "Faroese", "French", "Galician", "Gujarati", "Hausa", "Hawaiian","Hebrew", "Hindi", "Croatian", "Haitian", "Hungarian", "Armenian", "Indonesian", "Icelandic", "Italian", "Japanese","Javanese", "Georgian", "Kazakh", "Khmer", "Kannada", "Korean", "Latin", "Luxembourgish", "Lingala", "Lao","Lithuanian", "Latvian", "Malagasy", "Maori", "Macedonian", "Malayalam", "Mongolian", "Marathi", "Malay", "Maltese","Burmese", "Nepali", "Dutch", "Norwegian Nynorsk", "Norwegian", "Occitan", "Punjabi", "Polish", "Pashto","Portuguese", "Romanian", "Russian", "Sanskrit", "Sindhi", "Sinhalese", "Slovak", "Slovenian", "Shona", "Somali","Albanian", "Serbian", "Sundanese", "Swedish", "Swahili", "Tamil", "Telugu", "Tajik", "Thai", "Turkmen", "Tagalog","Turkish", "Tatar", "Ukrainian", "Urdu", "Uzbek", "Vietnamese", "Yiddish", "Yoruba", "Chinese", "Cantonese",
```

### Audio preprocessing example:

@@ -83,9 +118,7 @@ from speechlib import PreProcessor
file = "obama1.mp3"
# convert mp3 to wav
PreProcessor.mp3_to_wav(file)
wav_file = "obama1.wav"
wav_file = PreProcessor.convert_to_wav(file)
# convert wav file from stereo to mono
PreProcessor.convert_to_mono(wav_file)
@@ -98,5 +131,4 @@ This library uses following huggingface models:

#### https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb
#### https://huggingface.co/Ransaka/whisper-tiny-sinhala-20k-8k-steps-v2
#### https://huggingface.co/openai/whisper-medium
#### https://huggingface.co/pyannote/speaker-diarization
67 changes: 67 additions & 0 deletions metrics.txt
@@ -0,0 +1,67 @@
These metrics are from Google Colab tests.
These metrics do not take model download times into account.
These metrics were measured without quantization enabled
(quantization would make this even faster).

metrics for faster-whisper "tiny" model:
on cpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time:
speaker recognition time:
transcription time:

on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 64s


metrics for faster-whisper "small" model:
on cpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time:
speaker recognition time:
transcription time:

on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 95s


metrics for faster-whisper "medium" model:
on cpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time:
speaker recognition time:
transcription time:

on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 193s


metrics for faster-whisper "large" model:
on cpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time:
speaker recognition time:
transcription time:

on gpu:
audio name: obama_zach.wav
duration: 6 min 36 s
diarization time: 24s
speaker recognition time: 10s
transcription time: 343s
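One way to read these numbers is as a real-time factor (transcription seconds per second of audio) for the 6 min 36 s clip. A quick calculation from the GPU figures above:

```python
audio_seconds = 6 * 60 + 36  # 396 s of audio (obama_zach.wav)

# GPU transcription times (seconds) from the measurements above
gpu_transcription_seconds = {"tiny": 64, "small": 95, "medium": 193, "large": 343}

for size, t in gpu_transcription_seconds.items():
    rtf = t / audio_seconds
    print(f"{size}: {rtf:.2f}x real time")
```

So on this clip the tiny model transcribes at roughly 0.16x real time on GPU, while the large model is close to 0.87x, i.e. barely faster than real time.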
4 changes: 3 additions & 1 deletion requirements.txt
@@ -3,4 +3,6 @@ torch
torchaudio
pydub
pyannote.audio
speechbrain
speechbrain
accelerate
faster-whisper
6 changes: 3 additions & 3 deletions setup.py
@@ -5,7 +5,7 @@

setup(
name="speechlib",
version="1.0.7",
version="1.0.10",
description="speechlib is a library that can do speaker diarization, transcription and speaker recognition on an audio file to create transcripts with actual speaker names. This library also contain audio preprocessor functions.",
packages=find_packages(),
long_description=long_description,
@@ -19,6 +19,6 @@
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent",
],
install_requires=["transformers", "torch", "torchaudio", "pydub", "pyannote.audio", "speechbrain", "accelerate"],
python_requires=">=3.7",
install_requires=["transformers", "torch", "torchaudio", "pydub", "pyannote.audio", "speechbrain", "accelerate", "faster-whisper"],
python_requires=">=3.8",
)
2 changes: 1 addition & 1 deletion setup_instruction.md
@@ -9,7 +9,7 @@ for publishing:
pip install twine

for install locally for testing:
pip install dist/speechlib-1.0.6-py3-none-any.whl
pip install dist/speechlib-1.0.10-py3-none-any.whl

finally run:
twine upload dist/*
