Ad Blocking with ddwrt

This was done on Asus RT-AC68U, running DD-WRT v3.0-r41686 std (12/10/19)

Let’s start with the script. It downloads two adlists and combines them into one, then restarts dnsmasq to pick up the changes. The reason for using curl for the second list instead of wget is that wget refused to work with https on my ddwrt build 🤷.

wget -qO /tmp/mvps
curl -k|grep "^" >> /tmp/mvps
killall -HUP dnsmasq
stopservice dnsmasq && startservice dnsmasq

Go to Administration -> Commands. Paste the script there and click “Run Commands”.

Then save the same script with “Save Startup”. Why? I wanted to use the cron scheduler to run the script on a regular basis, but it just refused to work. Instead, I’m scheduling a weekly reboot, which triggers this startup command and updates the ad lists 🤷

All that’s left is to enable DNSMasq and Local DNS in the Services tab. Then add this to the Additional Dnsmasq Options field:
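The option snippet itself is missing from this copy of the post. Given that the script above writes a hosts-format list to /tmp/mvps, the matching dnsmasq directive would be addn-hosts (treat the exact path as an assumption tied to the script above):

```
addn-hosts=/tmp/mvps
```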


Then go to Administration -> Keep Alive and schedule a weekly/monthly reboot. If cron is broken in your build, though, this may not work either (to check, ssh to your router and look at the timestamp on the /tmp/mvps file). In that case, you may just have to rerun the script manually from time to time to get the latest ad lists.

Update 2020: I finally switched to pfSense, and pfBlockerNG provides a much better experience. Pi-hole is another great option – also much better than tinkering with dd-wrt.

Starting with PySpark – configuration

PySpark is a pain to configure.

For this guide I am using macOS Mojave, Spark 2.4.0, and Python 3.

Start by downloading Spark. Extract it wherever you like – your home directory works.

Install the Java SDK. Important – some later versions don’t seem to be compatible with Spark 2.4.0. Version 8 seems to work.

Install pyspark: pip install pyspark

Configure your .zshrc/.bash_profile – depending on which shell you use:

export SPARK_PATH=~/spark-2.4.0-bin-hadoop2.7
export SPARK_HOME=~/spark-2.4.0-bin-hadoop2.7
export JAVA_HOME=$(/usr/libexec/java_home)
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON="jupyter"
# needed so pyspark opens a notebook rather than the plain jupyter console
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYSPARK_SUBMIT_ARGS="--master local[2] pyspark-shell"
alias snotebook='$SPARK_PATH/bin/pyspark --master local[2]'

Remember to reload your shell configuration (e.g. source ~/.zshrc).

Now, when you enter pyspark in your console, it’ll open a Jupyter notebook.

You can validate that the Spark context is available by entering this in your new notebook:

from pyspark import SparkContext
sc = SparkContext.getOrCreate()
print(sc.version)


My Thoughts about CSE 6250 Big Data Analytics in Healthcare (taking in Spring 2019)

This is my first class in the Georgia Tech OMSCS program. I would have preferred to take a different class first (Machine Learning), so I’d have a better understanding of machine learning algorithms before trying to apply them, but as a newcomer you’re last on the priority list, and the other classes were completely full by the time I could make my selection.


Prerequisites

It would help if you are familiar with Python and at least some machine learning algorithms.

The second homework involves some math that requires the chain rule for derivatives. However, the majority of the tasks are more practical.


Workload

Very intense. You’ll have to use multiple languages and tools to complete the homework. For this year (Spring 2019) that includes Python, a variation of SQL, Scala; Hadoop, Pig, Spark… This class takes more of my time than I wanted to spend on it, given a family and a full-time job.


Grading

The automated code grader has bugs.

My first homework had some points taken off for tasks we were not even supposed to do. I contacted the teaching assistant (TA) and had full credit restored.

The second homework had some points taken off because the graders split the script into parts and ran each part separately, while my code expected the script to run as a whole. The assignment did not mention anything about this. Again, I had full credit restored after talking to the TAs and demonstrating that the issue was this unstated requirement. The TA was very responsive.

For my third homework (Spark + Scala), I initially received 0 points because I had been trying out some plugins and modified the Scala project file. I forgot to revert it, and my homework could not be run by the automated grader. This time the first TA never responded (I waited about 4 days and followed up once), but the second TA replied right away. He manually reran my code, and I only lost a few points due to the bad project file.

The last, fifth homework (PyTorch + deep learning) requires a lot of time. As part of it, you can take part in a Kaggle competition with other classmates. I totally sucked at this one; I think I had some bugs in the data preprocessing stage, even though I passed the included unit tests.

A note about the homework submission process – if you miss a file or make a typo, you won’t know about it until your homework is officially graded. There is no immediate feedback on submission.


Tools

Docker – there are several ways to run your homework assignments if you don’t want to set up your own environment for each task. I used the provided Docker image (there is also an option to use an Azure virtual machine, but I did not use it).

TEX editor – I used TeXstudio on a Mac. For homework assignments that require a written answer, you can use regular Word and save to PDF. But some of them require you to type formulas, and although I found the TeX format extremely frustrating, at least the original homework assignment is provided in both tex and pdf formats. So you can start with the provided tex file and fill in the answers.

Overleaf – something I discovered at the end of the class. This is an online LaTeX editor that allows you to collaborate with other students. As long as you sign up with your Georgia Tech email, it’s free.

Professor Involvement

Nonexistent. Your only chance to see the professor is through the Udacity lectures. The professor did not answer a single question on Piazza; it was 100% TAs.

Overall Impression

There is no need to jam so many technologies into a single class. Sometimes I felt like I was just going through different sections of the homework filling in the missing parts (they usually provide a method signature and you write the body), without actually understanding the bigger picture. Not a bad class, especially if you can dedicate enough time to it, but I would not recommend it as your first class.

Machine Learning Algorithms Problem Types

Types of problems we can solve with machine learning:

  • Regression – helps establish a relationship between one or more input variables and a continuous outcome

    • Algorithms
      • Simple linear regression
      • Multiple Linear Regression
      • Polynomial Regression
      • Support Vector Regression (SVR)
      • Decision Tree Regression
      • Random Forest Regression
    • Sample problem: calculate the time I get to work based on the route I take and the day of the week
  • Classification – helps us assign an observation to a category (e.g. answer a yes/no question) based on one or more input variables

    • Algorithms
      • K Nearest Neighbors (KNN)
      • Kernel SVM
      • Logistic Regression
      • Naïve Bayes
      • Decision Tree
      • Random Forest Classification
    • Sample problem: will I be late or on time based on the route I take and the day of the week
  • Clustering – helps us discover clusters of data

    • Algorithms
      • Hierarchical Clustering
      • K Means
    • Sample problem: group customers into segments based on their income and spending
  • Association – helps determine an association among multiple events

    • Algorithms
      • Apriori
      • Eclat
    • Sample problem: if I like movie A, what other movies am I likely to enjoy
  • Reinforcement – helps balance exploiting the best-known option while continuing to explore alternatives

    • Algorithms
      • Thompson Sampling
      • Upper Confidence Bound (UCB)
    • Sample problem: we want to determine the most effective treatment. Instead of conducting a long-term randomized trial, use UCB or Thompson Sampling to converge on the best treatment in a shorter interval
  • Natural Language Processing

    • Algorithms
      • Any classification algorithm, but the most popular are Naïve Bayes and Random Forest
    • Sample problem: determine if an Amazon review is positive or negative
  • Deep Learning – can help determine hard-to-establish non-linear relationships between multiple input parameters and some expected outcome

    • Algorithms
      • Artificial Neural Networks (ANN)
      • Convolutional Neural Networks (CNN) – especially helpful when processing images
    • Sample problem: based on the credit score, age, balance, salary, tenure… determine if a customer is likely to continue using your service or leave
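To make the classification bucket concrete, here is a tiny k-nearest-neighbors classifier in plain Python. The commute data is invented for illustration, and nothing beyond the standard library is assumed:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training points by Euclidean distance to the query point.
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    # Vote among the k closest labels; ties resolve in proximity order.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy features: (route taken, day of week) -> was I late?
train = [
    ((1, 1), "on time"), ((1, 2), "on time"), ((1, 5), "late"),
    ((2, 1), "late"),    ((2, 3), "late"),    ((2, 5), "late"),
]

print(knn_predict(train, (1, 1)))  # -> on time
print(knn_predict(train, (2, 2)))  # -> late
```

A real project would reach for a library implementation such as scikit-learn’s KNeighborsClassifier, but the distance-and-vote logic is exactly this.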

Checking/Cleaning Disk Space on Linux

Check the disk space (may need to install ncdu first):

sudo ncdu /
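If ncdu isn’t available (or you don’t want to install it), plain du gives a similar, if less interactive, breakdown; /var below is just an illustrative target:

```shell
# Size of each top-level directory under /var, sorted smallest to largest.
# -x stays on one filesystem; errors from unreadable dirs are discarded.
sudo du -xh --max-depth=1 /var 2>/dev/null | sort -h
```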

Clean up unused stuff:

sudo apt-get clean
sudo apt-get autoclean
sudo apt-get autoremove

clean: clean clears out the local repository of retrieved package files. It removes everything but the lock file from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/. When APT is used as a dselect(1) method, clean is run automatically. Those who do not use dselect will likely want to run apt-get clean from time to time to free up disk space.

autoclean: Like clean, autoclean clears out the local repository of retrieved package files. The difference is that it only removes package files that can no longer be downloaded, and are largely useless. This allows a cache to be maintained over a long period without it growing out of control. The configuration option APT::Clean-Installed will prevent installed packages from being erased if it is set to off.

autoremove: is used to remove packages that were automatically installed to satisfy dependencies for some package and that are no longer needed.

See a related question on askubuntu:

Country Blocking with Rails and Cloudflare

Enable IP Geolocation in your Cloudflare panel – it should be in the Network tab.

The country code comes in the CF-IPCountry header, which Rails exposes as request.env['HTTP_CF_IPCOUNTRY'].

Now we can add a before_action filter to block or redirect the users from a specific country (in the example below we redirect all EU countries… because who has the time to figure out GDPR):

class ApplicationController < ActionController::Base
  before_action :block_gdpr_countries

  # Note: Cloudflare sends ISO 3166-1 codes, so Greece is 'GR' and the
  # United Kingdom is 'GB' (EU documents list them as 'EL' and 'UK').
  GDPR_COUNTRIES = [
    'BE', 'GR', 'LT', 'PT',
    'BG', 'ES', 'LU', 'RO',
    'CZ', 'FR', 'HU', 'SI',
    'DK', 'HR', 'MT', 'SK',
    'DE', 'IT', 'NL', 'FI',
    'EE', 'CY', 'AT', 'SE',
    'IE', 'LV', 'PL', 'GB'
  ].freeze

  def block_gdpr_countries
    return unless GDPR_COUNTRIES.include?(request.env['HTTP_CF_IPCOUNTRY'])
    redirect_to gdpr_path
  end
end

Remember to skip this action in the corresponding controller (in our case gdpr_controller) if you use a redirect:

skip_before_action :block_gdpr_countries

Python with VSCode (using Anaconda)

Let’s install Anaconda from

Update your .bashrc or .zshrc with

export PATH="$HOME/anaconda3/bin:$PATH"

To change VS Code Python version:

Cmd Shift P -> Python: Select Interpreter (choose Python 3)

Install auto-formatting and linting packages

conda install pylint
conda install autopep8

Create a Python debug configuration.
The stopOnEntry option is buggy with Python as of this writing and makes it impossible to create breakpoints – so we set it to false for now.
Setting pythonPath to Anaconda is only necessary if your default VS Code interpreter is different from Anaconda’s (and assuming you want to use Anaconda’s interpreter). Otherwise, you can leave it pointing to “${config:python.pythonPath}”.

      "name": "Python conda",
      "type": "python",
      "request": "launch",
      "stopOnEntry": false,
      "pythonPath": "~/anaconda3/bin/python",
      "program": "${file}",
      "cwd": "${workspaceFolder}",
      "env": {},
      "envFile": "${workspaceFolder}/.env",
      "debugOptions": ["RedirectOutput"],
      "args": ["-i"]

Elixir Phoenix Cache

A simple cache implementation backed by ETS (Erlang Term Storage).

Let’s start with the cache module:

defmodule SimpleCache do
  use GenServer

  @table :simple_cache

  def init(_) do
    # :named_table lets us reference the table by name;
    # :public lets fetch/3 run in the caller's process
    :ets.new(@table, [
      :set,
      :public,
      :named_table,
      read_concurrency: true,
      write_concurrency: true
    ])

    {:ok, %{}}
  end

  def start_link do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  def fetch(key, expires_in_seconds, fun) do
    case lookup(key) do
      {:hit, value} ->
        value

      :miss ->
        value = fun.()
        put(key, expires_in_seconds, value)
        value
    end
  end

  defp lookup(key) do
    case :ets.lookup(@table, key) do
      [{^key, expires_at, value}] ->
        case now() < expires_at do
          true -> {:hit, value}
          false -> :miss
        end

      _ ->
        :miss
    end
  end

  defp put(key, expires_in_seconds, value) do
    expires_at = now() + expires_in_seconds
    :ets.insert(@table, {key, expires_at, value})
  end

  defp now do
    System.system_time(:second)
  end
end
Update application.ex to start the cache under your supervision tree:

  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      supervisor(SimpleCache, [])
    ]

    opts = [strategy: :one_for_one, name: Supervisor]
    Supervisor.start_link(children, opts)
  end
Finally, use it:

    cache_for_seconds = 60
    key = "key"

    SimpleCache.fetch(key, cache_for_seconds, fn ->
      # placeholder for your expensive call
      {:ok, some_expensive_operation()}
    end)
Relevant links: