In 2017, I invested everything I had into Crypto. I didn’t have much. But I was all in. As you can probably guess from the title, things didn’t go so well.
For the first couple of months I was ecstatic. Prices kept going up. I was making more money in 24 hours than I had ever made in a regular job.
I began watching crypto YouTubers “explain” how various coins worked. I bought into the hype completely.
I told my friends to buy Crypto. “You don’t want to tell your grandkids you missed out, do you?”
I’d go onto coinmarketcap…
Can you represent 8,018,009 as a product of two prime numbers?
You can use whatever calculator or program you like. I’ll wait.
I couldn’t do it either. A computer can, at this size. But make the number a few hundred digits long and even a computer can’t.
This is the guiding principle behind Crypto Maths.
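A quick sketch makes the asymmetry concrete. Naive trial division cracks a seven-digit number in microseconds, while the same loop would run essentially forever on the few-hundred-digit numbers real cryptography uses:

```python
def trial_division(n):
    """Naive factorisation: try every candidate divisor up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

print(trial_division(8_018_009))  # → [2003, 4003]
```

Multiplying 2003 × 4003 back together is trivial; undoing it is the hard direction. That easy-one-way, hard-the-other property is the whole game.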
In this post we’ll go from a private key to an address using all the mathematical functions in between. Much of this comes from Chapter 4 of the ethereumbook.
You’ve heard many definitions of a Public Key. But here’s the real one:
“An Ethereum public key is a point on an elliptic curve, meaning it is a set of x…
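To make “a point on an elliptic curve” concrete, here is a minimal pure-Python sketch of deriving a public key from a private key on secp256k1, using the curve’s published constants. This is for illustration only; anything real should use an audited library.

```python
# secp256k1 domain parameters (the curve used by Ethereum and Bitcoin),
# as published in the SEC 2 specification
P = 2**256 - 2**32 - 977  # prime field modulus
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Group law on y^2 = x^3 + 7 (mod P). None plays the point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # a + (-a) = infinity
    if a == b:
        m = 3 * x1 * x1 * pow(2 * y1, -1, P) % P      # tangent slope (doubling)
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P       # chord slope (addition)
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def public_key(private_key):
    """pub = private_key * G, via double-and-add scalar multiplication."""
    result, addend = None, G
    while private_key:
        if private_key & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        private_key >>= 1
    return result

# An arbitrary toy private key (illustrative only, never use a real one here)
x, y = public_key(0xC0FFEE254729296A45A3885639AC7E10F9D54979)
assert (y * y - x**3 - 7) % P == 0  # the derived point really lies on the curve
```

The public key is just the pair (x, y) that falls out of that multiplication, which is why the definition above calls it a point.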
I’ve created a dataset for training film restoration models.
The video above shows a sample. On the left is a video of a great Star Wars scene. On the right is the same video made crappier.
The extracted frames are available here: https://www.kaggle.com/spiyer/old-film-restoration-dataset/. You could use this to train a film restoration model (like I’ve been doing). Enjoy!
Properly cleaned data is not as abundant as people make it out to be.
I’ve been trying to restore the Star Wars deleted scenes for some time now. My attempts have been far from perfect.
Recently I thought that if I…
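The pairing idea behind the dataset (clean frame in, degraded frame out) can be sketched with nothing but NumPy. A real degradation pipeline would add film grain, scratches and compression artifacts; this toy version just blurs and adds noise:

```python
import numpy as np

def degrade(frame, rng):
    """Make a clean grayscale frame 'crappier': 2x2 box-blur downscale, then noise."""
    h, w = frame.shape[0] // 2 * 2, frame.shape[1] // 2 * 2
    f = frame[:h, :w].astype(float)
    # Average each 2x2 block: halves the resolution and softens detail
    blurred = (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4
    noisy = blurred + rng.normal(0, 10, blurred.shape)   # synthetic "grain"
    return np.clip(noisy, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in for a real frame
target, inp = clean, degrade(clean, rng)                 # one (clean, degraded) training pair
```

A restoration model then trains on many such pairs, learning to map the degraded frame back to the clean one.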
Talk is cheap. Show me the code. — Linus Torvalds
There’s a lot of talk about swimming pool detection from aerial imagery.
You’re probably interested in a code first example. I was too. But I couldn’t find one.
I decided to make my own.
It’s not perfect. It’s not pretty. But it seems to work.
All code is on Github. Criticism is appreciated.
To make this you’ll need data. Lots of labelled training data. This can be tricky to obtain. Particularly when your budget is as low as mine ($0).
I managed to find a government resource that gives you…
In this post we’ll be trying to segment canopy cover and soil in satellite imagery.
Ideally we want to go from a regular satellite image:
Recently, I applied KMeans clustering to satellite imagery and was impressed by the results. I’ll tell you the tricks I learned so you don’t waste your time.
Things to note:
Each Terravion image has the…
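The core of the KMeans trick is small enough to sketch (scikit-learn assumed, with a synthetic two-tone “field” standing in for a real tile): flatten the image’s pixels into a sample matrix, cluster, and reshape the labels back into a mask.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 40x40 RGB "field": top half green-ish canopy, bottom half brown-ish soil
rng = np.random.default_rng(0)
img = np.zeros((40, 40, 3))
img[:20] = [0.2, 0.60, 0.2] + rng.normal(0, 0.03, (20, 40, 3))
img[20:] = [0.5, 0.35, 0.2] + rng.normal(0, 0.03, (20, 40, 3))

# The key step: flatten (H, W, 3) pixels into an (H*W, 3) sample matrix
pixels = img.reshape(-1, 3)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
mask = labels.reshape(img.shape[:2])   # back to (H, W): one cluster id per pixel
```

Note that KMeans assigns arbitrary cluster ids (canopy might be 0 or 1 on any given run), so you still have to decide which cluster is which afterwards.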
I’m often asked: “What music do you listen to?”. And I’d like to say something cool like ‘The Clash’ or ‘Black Sabbath’. But in reality I listen to a lot of uncool bands (Tears for Fears, for example).
To answer that question honestly, I’ll need to look at my most played songs on iTunes.
That got me thinking. Is there a way to create a wallpaper collage that consists of the top 30 or so bands you actually listen to?
Not the stuff you say you listen to. But the stuff you actually listen to.
Well I went ahead and created…
In production the stakes are high. People are going to be reading the outputs from the model. And the outputs better make sense.
Recently my team and I created an NLP classifier and put it into production on a large insurance dataset. It uses TfidfVectorizer and LinearSVC to classify free-text.
But I quickly realised that putting something into production is very different from the theory.
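For context, a toy version of that kind of pipeline looks like this, with invented two-class examples standing in for the insurance free-text:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny made-up training set: free-text claims and their (hypothetical) classes
texts = ["water damage to kitchen ceiling",
         "rear-end collision on highway",
         "burst pipe flooded the basement",
         "car crash at an intersection"]
labels = ["property", "motor", "property", "motor"]

# TF-IDF turns text into sparse vectors; LinearSVC draws the decision boundary
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["flooded laundry from a broken pipe"]))  # → ['property']
```

The production version differs in scale and plumbing, but the modelling core really is this small; everything hard lives around it.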
Data is now growing faster than processing speeds. One of the many solutions to this problem is to parallelise our computing on large clusters. Enter PySpark.
However, PySpark requires you to think about data differently.
Instead of looking at a dataset row-wise, PySpark encourages you to look at it column-wise. This was a difficult transition for me at first. I’ll tell you the main tricks I learned so you don’t have to waste your time searching for the answers.
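The shift is easier to see in miniature. The snippet below is plain Python (no Spark needed), but the column-wise line is shaped exactly like PySpark’s `df.withColumn("temp", col("temp") * 9 / 5 + 32)`:

```python
# One tiny table, two mental models
rows = [{"city": "Perth", "temp": 31.0},
        {"city": "Hobart", "temp": 12.0}]     # row-wise: a list of records

cols = {"city": ["Perth", "Hobart"],
        "temp": [31.0, 12.0]}                 # column-wise: named columns

# Row-wise thinking: visit one record at a time
fahrenheit_rows = [{**r, "temp": r["temp"] * 9 / 5 + 32} for r in rows]

# Column-wise thinking (the PySpark mindset): transform the whole
# column in a single expression
fahrenheit_cols = {**cols, "temp": [t * 9 / 5 + 32 for t in cols["temp"]]}
```

In Spark the column-wise form is what lets the engine plan and parallelise the work, which is why the API keeps nudging you towards it.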
I’ll be using the Hazardous Air Pollutants dataset from Kaggle.
This Dataset is
df = spark.read.csv('epa_hap_daily_summary.csv', inferSchema=True, header=True)