ChimeraNet

An unofficial implementation of the music separation model by Luo et al.

Getting started

Sample separation task with a pretrained model
  1. Prepare .wav files to separate.

  2. Install the library: pip install git+https://github.com/leichtrhino/ChimeraNet

  3. Download the pretrained model.

  4. Download the sample script.

  5. Run the script:

python chimeranet-separate.py -i ${input_dir}/*.wav \
    -m model.hdf5 \
    --replace-top-directory ${output_dir}

Output in a nutshell

  • The filenames of the separated outputs follow the format ${input_file}_{embd,mask}_ch[12].wav (see the sketch after this list).
  • embd and mask indicate that the output was inferred from the deep clustering embeddings or from the mask output, respectively.
  • ch1 and ch2 are the voice and music channels, respectively.
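
For illustration only, the following Python sketch (not part of ChimeraNet; the helper and the directory name 'separated' are hypothetical) lists the files in an output directory and decodes their names according to the convention above.

import re
from pathlib import Path

# Hypothetical helper, not part of ChimeraNet: decode output filenames of the
# form ${input_file}_{embd,mask}_ch[12].wav described above.
PATTERN = re.compile(r'^(?P<stem>.+)_(?P<head>embd|mask)_ch(?P<ch>[12])\.wav$')
HEADS = {'embd': 'deep clustering', 'mask': 'mask'}
CHANNELS = {'1': 'voice', '2': 'music'}

def describe_outputs(output_dir):
    """Yield (path, source stem, inference head, channel name) for each output file."""
    for path in sorted(Path(output_dir).glob('**/*.wav')):
        m = PATTERN.match(path.name)
        if m is None:
            continue  # not a file produced by chimeranet-separate.py
        yield path, m['stem'], HEADS[m['head']], CHANNELS[m['ch']]

for path, stem, head, channel in describe_outputs('separated'):  # hypothetical ${output_dir}
    print(f'{path.name}: {channel} channel of {stem} ({head} output)')
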
Training and separation examples

See the Example section of the ChimeraNet documentation.

Install

Requirements
  • keras
  • one of Keras' backends (i.e. TensorFlow, CNTK, or Theano)
  • sklearn
  • librosa
  • soundfile
Instructions
  1. Run pip install git+https://github.com/leichtrhino/ChimeraNet or use any other Python package installer. (Currently, ChimeraNet is not on PyPI.)
  2. Install a Keras backend if the environment does not already have one. Install TensorFlow if unsure. A quick check of the installed environment is sketched below.
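
As a quick sanity check (a minimal sketch, not taken from the ChimeraNet documentation), the following Python snippet imports the required packages and prints which Keras backend is active; it assumes the Keras 2.x API:

# Minimal sanity check: confirm the requirements above import cleanly and
# report which backend Keras picked up (Keras 2.x API assumed).
import keras
import librosa
import sklearn
import soundfile

print('Keras backend:', keras.backend.backend())
print('librosa', librosa.__version__, '| soundfile', soundfile.__version__,
      '| scikit-learn', sklearn.__version__)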

See also
