Data scientist & TensorFlow developer

Introducing an ML framework written in Go from scratch.

Python is known as the king of data science. Its usage has exploded in the last decade, and ML libraries such as TensorFlow, PyTorch, Caffe, and more have made the code extraordinarily easy to write. The underlying C/C++ software handles the heavy numerical computation behind the scenes, so that a data scientist can focus on the real work. Which is great, but doesn’t change the fact that Python is slow.

I’m here to tell you why Go is the new preferred ML programming language and why my Keras-equivalent neural network architecture will lead the way.

Photo by Kevin Ku on Unsplash

I started my programming journey with C++ and was at one point on the verge of creating a game engine. Python, on the other hand, was something I’d always tried to avoid. Moving from a low-level language to a more abstract way of programming was a big revelation. Go brings the best of both worlds. Like lower-level languages such as C/C++, Go is compiled, which means its performance comes close to theirs, and it uses garbage collection to handle the allocation and removal of objects. And, like Python, its code is easy and fun to write, and it handles concurrency like no other. …
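
To make that last point concrete, here is a minimal toy sketch of Go's concurrency primitives; it is my own illustration and has nothing to do with the framework introduced above:

package main

import (
	"fmt"
	"sync"
)

// Toy example: square a slice of numbers concurrently, one goroutine
// per element, and wait for all of them with a WaitGroup.
func main() {
	nums := []float64{1, 2, 3, 4}
	squares := make([]float64, len(nums))

	var wg sync.WaitGroup
	for i, n := range nums {
		wg.Add(1)
		go func(i int, n float64) {
			defer wg.Done()
			squares[i] = n * n // each goroutine writes its own slot
		}(i, n)
	}
	wg.Wait()

	fmt.Println(squares) // [1 4 9 16]
}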


The Reason Why Cloud Services Have The Upper Hand In The Big Data Process

The Future of Cloud Services

The demand for computational power has never been greater than right now. Due to big data, companies and organisations are not only looking for the right people to handle the right part of the development process, they’re also shifting their priorities towards bringing their products closer to their clients in a secure, manageable, but most importantly efficient, high-throughput, low-latency way.

Big Data Needs Big Servers

Moving to the Cloud is happening right now!

Companies such as Netflix, Instagram, Pinterest, and Apple have already felt the pressure of scalability and have turned to the Cloud to handle their overwhelming customer activity.

Not to mention Google, along with all its services like YouTube, Drive, Photos, and much more. These apps are all hosted on Google Cloud…


Machine Learning

Get your ML app up and running in a matter of minutes.

Streamlit has proved to be an outstanding opportunity for developers to share their machine learning models instantly without having to worry too much about the underlying infrastructure.

In the following paragraphs, I’ll walk through the code, starting from a TensorFlow model: what changes you have to make and how I deployed my neural style transfer application in the most elegant manner with Streamlit Sharing.

Image by Author

And yes, this is Angelina Jolie as if painted by Gustav Klimt.

I know you’ve been dying to see the end result, so without further ado:

Go ahead and try it out for yourself…


2021 Guide to a Modern GAN Architecture

Binary cross-entropy loss, or BCE loss, is traditionally used as the loss for training GANs, but it’s far from being the best choice. Hopefully, after reading this article, it’ll be quite clear why, as well as what you can do about it.

Image for post
Photo by Luke Chesser on Unsplash

The role of the BCE loss during GAN training

If you’re trying to build a binary classifier, chances are you’re using the binary cross-entropy loss. The idea behind BCE is rather simple.

For the sake of the argument, picture a scenario where you are trying to predict two classes (it could be cats versus dogs, Tesla versus Ferrari, etc.). The BCE loss is based on this question: ‘What is the probability of a picture/data point belonging to one of the two classes?’. …
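
For reference, this is the standard binary cross-entropy over N labeled examples, where y_i is the true label and ŷ_i the predicted probability:

\mathcal{L}_{\text{BCE}} = -\frac{1}{N} \sum_{i=1}^{N} \Big[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \Big]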


Progressive growing as a solution for creating high-resolution images with DCGANs

Deep Convolutional Generative Adversarial Networks are the most exquisite type of neural network architecture in 2020. Generative models go way back to the ’60s, but GANs themselves were largely created by Ian Goodfellow in 2014, and they have unprecedented value regarding the future of deep learning.

For more on GANs or more specifically DCGANs, I encourage you to take a peek at the following articles:

Quick recap:

  • the generator recreates a sample from a noise vector, and that sample should be indistinguishable from the training distribution
  • since both neural networks are differentiable, we can use their gradients to steer them in the correct direction
  • the generator is the primary goal; once we are satisfied with its results, the discriminator can be…
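
Formally, those gradients are steering the two networks through the standard GAN minimax objective from Goodfellow et al. (2014):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]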


Implementing the popular classification algorithms in Golang from scratch

KNN

KNN stands for k-nearest neighbors and is an algorithm based on a simple idea; it can be used for both classification and regression. The decision boundary becomes smoother with increasing values of k. As k grows towards infinity, the prediction finally becomes all blue or all red, depending on the total majority.

Source: Scikit-learn

KNN works by iterating through the data, calculating the Euclidean distance from the given point to every sample, and assigning the point to whichever of the several categories holds the majority among its k nearest neighbors.

Okay, now that you’re familiar with how the KNN algorithm works in theory, let’s dig into the code.

I have based my code on a plotting library I previously created, which makes it easy to implement and plot. The basic interface is a point with three fields: the x and y coordinates and a boolean, which represents the category. …
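
As a minimal sketch of that idea, assuming a Point type shaped like the one described above (the names here are my own placeholders, not the author's library):

package main

import (
	"fmt"
	"math"
	"sort"
)

// Point mirrors the interface described above: x/y coordinates plus a
// boolean category. The names are illustrative, not the author's.
type Point struct {
	X, Y     float64
	Category bool
}

// euclidean returns the straight-line distance between two points.
func euclidean(a, b Point) float64 {
	return math.Hypot(a.X-b.X, a.Y-b.Y)
}

// Classify returns the majority category among the k nearest
// training points to the query.
func Classify(train []Point, query Point, k int) bool {
	// Sort a copy of the training set by distance to the query.
	sorted := make([]Point, len(train))
	copy(sorted, train)
	sort.Slice(sorted, func(i, j int) bool {
		return euclidean(sorted[i], query) < euclidean(sorted[j], query)
	})

	// Count votes among the k nearest neighbors.
	votes := 0
	for _, p := range sorted[:k] {
		if p.Category {
			votes++
		}
	}
	return votes*2 > k // true if the majority of neighbors are true
}

func main() {
	train := []Point{
		{0, 0, true}, {1, 0, true}, {0, 1, true},
		{5, 5, false}, {6, 5, false},
	}
	fmt.Println(Classify(train, Point{X: 0.5, Y: 0.5}, 3)) // true
}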


Deep Learning

Breaking two of the most popular modern generative models by their core

“Generative Adversarial Networks is the most interesting idea in the last 10 years in Machine Learning.” (Yann LeCun, Director of AI Research at Facebook AI)

Generative models have taken the machine learning world by storm, and thanks to a large research community and the vast amount of attention they have been receiving, their progress is constantly on the rise.

Source: GitHub

What makes generative models so desirable?

Generative modeling involves using a model to generate new examples that plausibly come from an existing distribution of samples, such as generating new images that are similar to, yet distinct from, a dataset of existing images.

Not long ago, machine learning was about finding deep connections given enough data. The rise of generative models is opening new opportunities. For example, take a look at GAN-made people. It might come as a shock to you that they don’t exist. …


Deep Learning

The future of generative models

We have come a long way in creating generative adversarial networks, from a simple idea developed mainly by Ian Goodfellow to results that absolutely shocked the machine learning community. Without further ado, prepare to be amazed.

StyleGANs

StyleGAN was proposed not too long ago, and its main goal is to produce high-quality, high-resolution images and a greater diversity of images in the output.

The coolest thing about StyleGAN is the increased control over image features. This can be done either by adding features like shoes to a dog picture or by mixing styles from two different generated images together.

Source: Twitter


Intuition behind the most sophisticated computer vision algorithms

Computer vision is one of the hottest areas of machine learning. From pose estimation and vehicle detection to surveillance, you name it! I’ll be covering the essentials of how these algorithms came together and the intuition behind them.

Photo by Jeremy Yap on Unsplash

The object detection task comprises two parts: classification and detection. The key idea is to detect objects in an image and pass the region surrounding each object to a convolutional neural network for prediction. These concepts are fairly simple for a human being to grasp, but think about how you would implement a program to detect miscellaneous vehicles from a traffic camera at a rate of 25 images per second. …


Why, how and to what extent?

Creating and deploying machine learning models takes time: from data preprocessing and creating a data pipeline, to building a model for your purposes and settling on an architecture, to this thing called hyperparameter tuning, and finally to deploying the model to production.

Hopefully, after reading this, you’ll be able to recognize what architectural changes your model needs in order to reach the desired accuracy and, most importantly, when hyperparameter tuning is needed and when it is overkill.

For those of you who just tuned in, neural networks consist of both parameters and hyperparameters. Parameters are the weights, biases, and other values the network learns on its own; you don’t set them directly. Hyperparameters are set at training time and have a direct impact on the model: batch size, number of epochs, momentum, activations, number of layers and units, lambda coefficients for regularization, the dropout rate. …
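
To make the distinction concrete, here is a hypothetical hyperparameter config in Go; none of these field names come from a specific library:

package main

import "fmt"

// Hyperparams groups the knobs you set before training. All names
// here are illustrative placeholders, not a real library's API.
type Hyperparams struct {
	BatchSize    int
	Epochs       int
	LearningRate float64
	Momentum     float64
	Lambda       float64 // regularization coefficient
	DropoutRate  float64
	HiddenUnits  []int // units per hidden layer
}

func main() {
	hp := Hyperparams{
		BatchSize:    32,
		Epochs:       50,
		LearningRate: 0.001,
		Momentum:     0.9,
		Lambda:       0.01,
		DropoutRate:  0.5,
		HiddenUnits:  []int{128, 64},
	}
	fmt.Printf("%+v\n", hp)
}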
