Reproduced from the original post

In this post, we are going to build an object counting model based on simple network architecture. Although we use the crowd dataset here, a similar solution can be applied to the rather more useful applications such as counting cells, crops, fruits, trees, cattle, or even endangered species in the wild.

There are different ways to count the object(s) in a given image. One could make use of R-CNN based models for object detection as shown in the example below

and that would work just fine, but what do you do when you have a…
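As a toy sketch of the detection-based approach mentioned above (the detector output here is hypothetical, not from the post): once a detector returns boxes with confidence scores, counting reduces to counting the boxes above a confidence threshold.

```python
# Hypothetical detector output: (confidence, bounding box) pairs
detections = [
    (0.92, (10, 10, 50, 50)),
    (0.85, (60, 20, 110, 70)),
    (0.30, (5, 80, 20, 95)),   # low confidence, likely a false positive
]

def count_objects(detections, threshold=0.5):
    """Count detections whose confidence meets the threshold."""
    return sum(1 for conf, _ in detections if conf >= threshold)

print(count_objects(detections))  # 2 of the 3 detections pass the threshold
```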


Reproduced from here

TLDR: the Fintech and Artificial Intelligence ecosystems are on fire🔥, and augmenting financial technology with AI has the potential to go much further. <🔌> If you’re looking to collaborate or partner with someone who understands and has practical experience in Fintech and AI, I know someone 😎</🔌>.

I have used the data compiled by CBInsights for exploration. The data may or may not include the entire unicorn universe, so treat this as information based on the visible universe.

There are 440 companies in the 🦄Unicorn club. Valuations range from $1B to a maximum of $75B. The plot below…


Source: https://en.wikipedia.org/wiki/Deepfake

It is great to see Twitter planning to do something about the use of synthetic and manipulated media (deepfakes). And, they are asking people to help shape their approach.

Link to the survey

While I appreciate their move, as a user of the platform I have some concerns (not exhaustive), most of which I voiced in the survey feedback:

It gives too much power to Twitter: they get to decide whether content is real or fake, which in turn could shape the conversation and the flow of information.

MODEL

  • At what level (in terms of evaluation…

Reproduced from my original post here

Update 22-Jan-2020: In Part 2 of the image captioning series, we use attention to improve the vanilla seq2seq model ➡️ https://www.katnoria.com/attn-nic/

In the Neural Image Caption generation post, we saw how to build and train a neural network that can generate a caption for any given image. We also saw how the choice of decoder impacts the quality of the generated captions. And while we described in words how each decoder works, I find the concepts easier to understand when they are visualized.

source: https://www.katnoria.com/nlg-decoders/

Recollect that after we have trained our image caption model…


Employer Ratings Dashboard: https://dash.sprinkleai.com/flexdash

Notebooks, despite some criticism around them, are a fun and easy way to get started with data exploration, analysis, and even initial model building. Imagine you have analyzed an interesting dataset and you wish to share the results with your audience.

Now, you could share the notebook through GitHub/Nbviewer, or export it as HTML and host it online. You could also create a PDF report, a PowerPoint deck, or a Word document — or you could share a dashboard 😎

I hear you. Wouldn’t it be time-consuming to design and host dashboards? …


Source: TensorFlow Dev Summit

The TensorFlow dev summit concluded a few days ago. I enjoyed it immensely; to say it exceeded my expectations would be an understatement. There were many exciting announcements, and I will try to cover some of them in this post.

tldr;


The post on Mixture density networks (MDN) is divided into two parts.

Part 1: Motivation for using MDN (this post)

Part 2: MDN Implementation using Tensorflow

Long form post is available here: https://www.katnoria.com/mdn/

Supervised machine learning models learn the mapping between the input features (x) and the target value (y). Regression models predict a continuous output, such as a house price or stock price, whereas classification models predict a class/category for a given input, such as the positive or negative sentiment of a sentence or paragraph. …
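As a toy illustration of that mapping (a minimal NumPy-only sketch; the data and variable names are mine, not from the MDN post): regression fits a continuous function, while classification maps a score to a discrete label.

```python
import numpy as np

# Regression: learn a continuous mapping y = w*x + b via least squares
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                        # continuous target generated from w=2, b=1
A = np.vstack([x, np.ones_like(x)]).T    # design matrix [x, 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(w, 2), round(b, 2))          # recovers the slope and intercept

# Classification: map a model score to a discrete class
# (e.g. positive/negative sentiment from a sigmoid probability)
score = 0.8
label = "positive" if score >= 0.5 else "negative"
print(label)
```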


Excited about the upcoming TensorFlow 2.0 release? I sure am, and so is Aurélien.

In this post, I share how I installed the TensorFlow nightly build on my Ubuntu 18.04 box.

We do not want to mess up the existing TensorFlow installation, so we will start by creating a new environment. I use conda (miniconda) to manage my environments:

# create environment
conda create -n tf2 pip python=3.6
# activate the environment
conda activate tf2

Install TensorFlow from the nightly builds; we will use the GPU version.

pip install tf-nightly-gpu-2.0-preview
# Uncomment the line below to install the CPU version
# pip install…


Credit: https://cloud.google.com/blog/products/ai-machine-learning/what-makes-tpus-fine-tuned-for-deep-learning

Update December 2020: I have published a major update to this post, where I cover TensorFlow, PyTorch, PyTorch Lightning, and the hyperparameter tuning libraries Optuna, Ray Tune, and Keras-Tuner, along with experiment tracking using Comet.ml and Weights & Biases.

The recent announcement of TPU availability on Colab made me wonder whether it presents a better alternative than the GPU accelerator on Colab or training locally. So, I decided to take it for a spin using a very simple notebook that trains a convnet to classify CIFAR10 images.

Without further ado, here are the results.

Katnoria

{ML/DL/DataViz} @ https://www.katnoria.com, Occasional photography @ https://500px.com/katnoria.
