
Deepgram, a YC-backed startup using machine learning to analyze audio data for businesses, is open-sourcing an internal deep learning tool called Kur. The release should further help those interested in the space get their ideas off the ground more easily. The startup is also including 10 hours of transcribed audio, split into 10-second increments, to expedite the training process.

Similar to Keras, Kur further abstracts the process of building and training deep learning models. By making deep learning easier, Kur is also making image recognition and speech analysis more accessible.
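To make the idea of "abstracting away" model building concrete, a tool in this vein typically lets you describe a network declaratively instead of writing training code. The fragment below is purely illustrative: the section names, layer names and parameters are hypothetical and are not guaranteed to match Kur's actual specification format.

```yaml
# Hypothetical, illustrative spec -- keys and layer names are
# assumptions, not Kur's documented syntax.
model:
  - input: utterance        # audio features in
  - recurrent:
      size: 256             # a single recurrent layer
  - dense: 29               # one output per character class
  - activation: softmax

train:
  data:
    path: speech_data/      # e.g. the 10 hours of transcribed audio
  epochs: 10
```

The point, as with Keras, is that a newcomer can tweak a description like this and retrain, rather than writing a training loop from scratch.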

Scott Stephenson, CEO of Deepgram, explained to me that when the company was first getting off the ground, the team used LibriSpeech, an online dataset of public domain audiobooks split up and labeled for training early machine learning models.

Deepgram isn't reinventing the wheel with its release. Coupled with data dumps and open source projects from startups, universities and big tech companies alike, frameworks like TensorFlow, Caffe and Torch have become quite usable. The ImageNet database has worked wonders for image recognition, and many developers use VoxForge for speech, but more open source data is never a bad thing.

"You can start with classifying images and end up with self-driving cars," added Stephenson. "The point is giving someone that first little piece, and then people can change the model and make it do something different."

Getting Kur into the hands of developers will also help Deepgram with recruiting talent. The strategy has proved itself quite useful for large tech companies looking to recruit technical machine learning and data science engineers.

Via Kurhub.com, developers will soon be able to share models, data sets and weights to spur more innovation in the space. Deepgram eventually wants to release weights for the data set being released today so DIYers can avoid processor-intensive training altogether. Even with a relatively modest 10 hours of audio, models still take about a day to train on a GPU and considerably longer on an off-the-shelf computer.

If you end up exhausting the Deepgram data set, you can also easily expand it with your own data. All you have to do is create WAV files with embedded transcriptions in 10-second increments. You can feed data-hungry deep learning models with more resources in the public domain to improve accuracy.
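As a sketch of that preparation step, the helper below splits a long recording into 10-second WAV chunks using only Python's standard library. The function name is ours, and pairing each chunk with its transcription (the article doesn't specify how transcriptions are embedded) is left out.

```python
import math
import wave

def split_wav(path, chunk_seconds=10, prefix="chunk"):
    """Split a WAV file into fixed-length chunks of chunk_seconds each.

    Hypothetical helper for preparing training data; the last chunk
    may be shorter than chunk_seconds. Returns the list of chunk paths.
    """
    with wave.open(path, "rb") as src:
        rate = src.getframerate()
        frames_per_chunk = rate * chunk_seconds
        n_chunks = math.ceil(src.getnframes() / frames_per_chunk)
        out_paths = []
        for i in range(n_chunks):
            # readframes() advances through the file, so successive
            # calls yield consecutive chunks.
            frames = src.readframes(frames_per_chunk)
            out_path = f"{prefix}_{i:04d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setnchannels(src.getnchannels())
                dst.setsampwidth(src.getsampwidth())
                dst.setframerate(rate)
                dst.writeframes(frames)
            out_paths.append(out_path)
        return out_paths
```

For example, a 25-second recording would yield two full 10-second chunks plus a final 5-second remainder.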
