Starting with PySpark – configuration


PySpark is a pain to configure.

For this guide I am using macOS Mojave, Spark 2.4.0, and Python 3.

Start by downloading Spark from https://spark.apache.org/downloads.html. Extract it anywhere – your home directory works fine.

Install the Java JDK. Important – some later versions don’t seem to be compatible with Spark 2.4.0; version 8 works: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Install pyspark: pip install pyspark

Configure your .zshrc/.bash_profile – depending on which shell you use:

export SPARK_PATH=~/spark-2.4.0-bin-hadoop2.7
# Open PySpark inside a Jupyter notebook instead of the plain shell
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"

export PYSPARK_PYTHON=python3
alias snotebook='$SPARK_PATH/bin/pyspark --master local[2]'

export SPARK_HOME=~/spark-2.4.0-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH
# Spark 2.4.0 ships with py4j 0.10.7, so only that version needs to be on PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH

export PYSPARK_SUBMIT_ARGS="--master local[2] pyspark-shell"

export JAVA_HOME=$(/usr/libexec/java_home)

Remember to reload your console (source ~/.zshrc or ~/.bash_profile).

Now, when you enter pyspark in your console, it will open a Jupyter notebook.

You can validate that the Spark context is available by entering this in your new notebook:

from pyspark import SparkContext
sc = SparkContext.getOrCreate()
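
As an additional sanity check (a small sketch not in the original guide, reusing the sc context from above), you can run a tiny job:

# Distribute the numbers 0-99 and sum them; this should print 4950
print(sc.parallelize(range(100)).sum())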

References: https://medium.com/@yajieli/installing-spark-pyspark-on-mac-and-fix-of-some-common-errors-355a9050f735

Categories: Machine Learning

My Thoughts about CSE 6250 Big Data Analytics in Healthcare (taking it in Spring 2019)


This is my first class in the Georgia Tech OMSCS program. I would have preferred to take a different class first (Machine Learning) so I would have a better understanding of machine learning algorithms before trying to apply them, but as a newcomer you’re last on the priority list, so the other classes were completely full by the time I could make my selection.

Prerequisites

It would help if you are familiar with Python and at least some machine learning algorithms.

The second homework involves some math that requires you to use the chain rule for derivatives. However, the majority of the tasks are more practical.

Effort

Very intense. You’ll have to use multiple languages and tools to complete your homework. For this year (Spring 2019) that includes Python, a SQL variant, and Scala, plus Hadoop, Pig, Spark… This class is taking more of my time than I wanted to spend on it with a family and a full-time job.

Grading

The automated code grader has bugs.

My first homework had some points taken off because of tasks we were not even supposed to do. I contacted the teaching assistant (TA) and had full credit restored.

The second homework had some points taken off because the graders split the script into parts and ran each part separately, while my code expected the whole script to run as a unit. The assignment did not mention anything about this. I again had full credit restored after talking to the TAs and demonstrating that the issue was this unstated requirement. The TA was very responsive.

For my third homework (Spark + Scala), I initially received 0 points because I had been trying out some plugins and modified the Scala project file. I forgot to revert it, so my homework could not be run by the automated grader. This time the first TA never responded (I waited about 4 days and followed up once), but a second TA replied right away. He manually reran my code, and I only lost a few points due to the bad project file.

The last, fifth homework (PyTorch + deep learning) requires a lot of time. You can take part in a Kaggle competition with other classmates as part of this homework. I totally sucked at this one; I think I had some bugs in the data preprocessing stage, even though I passed the included unit tests.

A note about the homework submission process – if you miss a file or make a typo, you won’t know about it until your homework is officially graded. There is no immediate feedback on submission.

Tools

Docker – there are several ways you can run your homework assignments if you don’t want to set up your local environment for each task. I used the provided Docker image (there is also an option to use an Azure virtual machine, but I did not use it).

TeX editor – I used TeXstudio on a Mac. You can use regular Word and save to PDF for homework assignments that require a written answer, but some of them require you to type formulas. Although I found the TeX format extremely frustrating, at least the original homework assignments are provided in both .tex and .pdf formats, so you can start with the provided .tex file and fill in the answers.

Overleaf – something I discovered at the end of the class. This is an online LaTeX editor that allows you to collaborate with other students. As long as you sign up with your Georgia Tech email, it’s free.

Professor Involvement

Nonexistent. Your only chance to see the professor is through the Udacity lectures. The professor did not answer a single question on Piazza; it was 100% TAs.

Overall Impression

There is no need to jam so many technologies into a single class. Sometimes I felt like I was just going through different sections of the homework filling in the missing parts (they usually provide a method signature and you’re supposed to write the code) without actually understanding the bigger picture. Not a bad class, especially if you can dedicate enough time to it, but I would not recommend it as your first class.

Categories: Machine Learning

Machine Learning Algorithms Problem Types


Types of problems we can solve with machine learning:

  • Regression – helps establish a relationship between one or more sets of data

    • Algorithms
      • Simple linear regression
      • Multiple Linear Regression
      • Polynomial Regression
      • Support Vector Regression (SVR)
      • Decision Tree
      • Random Forest Regression
    • Sample problem: calculate the time I get to work based on the route I take and the day of the week
  • Classification – helps us answer a yes/no type of question based on one or more sets of data

    • Algorithms
      • K Nearest Neighbors (KNN)
      • Kernel SVM
      • Logistic Regression
      • Naïve Bayes
      • Decision Tree
      • Random Forest Classification
    • Sample problem: will I be late or on time based on the route I take and the day of the week (see the sketch after this list)
  • Clustering – helps us discover clusters of data

    • Algorithms
      • Hierarchical Clustering
      • K Means
    • Sample problem: group customers into segments based on their income and spending
  • Association – helps determine an association among multiple events

    • Algorithms
      • Apriori
      • Eclat
    • Sample problem: if I like movie A, what other movies will I likely enjoy
  • Reinforcement – helps balance exploring new options with exploiting what already works

    • Algorithms
      • Thompson Sampling
      • Upper Confidence Bound (UCB)
    • Sample problem: we want to determine the most effective treatment. Instead of conducting a long-term randomized trial, use UCB or Thompson Sampling to determine the best treatment in a shorter interval
  • Natural Language Processing

    • Algorithms
      • Any classification algorithm, but the most popular are Naïve Bayes and Random Forest
    • Sample problem: determine if an Amazon review is positive or negative
  • Deep Learning – can help determine hard-to-establish non-linear relationships between multiple input parameters and some expected outcome

    • Algorithms
      • Artificial Neural Networks (ANN)
      • Convolutional Neural Networks (CNN) – especially helpful when processing images
    • Sample problem: based on the credit score, age, balance, salary, tenure… determine if a customer is likely to continue using your service or leave
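
To make the classification sample problem above concrete, here is a minimal sketch with scikit-learn; the toy data and feature encoding are made up purely for illustration:

from sklearn.ensemble import RandomForestClassifier

# Each row is [route, day_of_week] (route 0 or 1, day 0-6); labels: 1 = late, 0 = on time
X = [[0, 0], [0, 1], [0, 2], [0, 4], [1, 0], [1, 1], [1, 2], [1, 4]]
y = [0, 0, 0, 1, 1, 1, 1, 1]

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, y)

# Will I be late taking route 1 on a Wednesday (day 2)?
print(model.predict([[1, 2]]))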

Checking/Cleaning Disk Space on Linux


Check the disk space (may need to install ncdu first):

sudo ncdu /
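
If ncdu is not installed, it can be added with apt (assuming a Debian/Ubuntu system, consistent with the apt-get commands below):

sudo apt-get install ncdu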

Clean up unused stuff:

sudo apt-get clean
sudo apt-get autoclean
sudo apt-get autoremove

clean: clean clears out the local repository of retrieved package files. It removes everything but the lock file from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/. When APT is used as a dselect(1) method, clean is run automatically. Those who do not use dselect will likely want to run apt-get clean from time to time to free up disk space.

autoclean: Like clean, autoclean clears out the local repository of retrieved package files. The difference is that it only removes package files that can no longer be downloaded, and are largely useless. This allows a cache to be maintained over a long period without it growing out of control. The configuration option APT::Clean-Installed will prevent installed packages from being erased if it is set to off.

autoremove: is used to remove packages that were automatically installed to satisfy dependencies for some package and that are no longer needed.

See a related question on askubuntu: https://askubuntu.com/questions/3167/what-is-difference-between-the-options-autoclean-autoremove-and-clean

Categories: Linux

Country Blocking with Rails and Cloudflare


Enable IP Geolocation in your Cloudflare panel – it should be in the Network tab.

The country code comes in Cloudflare’s CF-IPCountry header, which Rack exposes as HTTP_CF_IPCOUNTRY.

Now we can add a before_action filter to block or redirect users from specific countries (in the example below we redirect all EU countries… because who has the time to figure out GDPR):

class ApplicationController < ActionController::Base
  before_action :block_gdpr_countries

  # ISO 3166-1 alpha-2 codes, as returned by Cloudflare (note GR for Greece and GB for the UK)
  GDPR_COUNTRIES = [
    'BE', 'GR', 'LT', 'PT',
    'BG', 'ES', 'LU', 'RO',
    'CZ', 'FR', 'HU', 'SI',
    'DK', 'HR', 'MT', 'SK',
    'DE', 'IT', 'NL', 'FI',
    'EE', 'CY', 'AT', 'SE',
    'IE', 'LV', 'PL', 'GB'
  ]

  def block_gdpr_countries
    return unless GDPR_COUNTRIES.include?(request.env['HTTP_CF_IPCOUNTRY'])
    redirect_to gdpr_path
  end
end

Remember to skip this action in the corresponding controller (in our case gdpr_controller) if you use a redirect, otherwise visitors from those countries get stuck in a redirect loop:

skip_before_action :block_gdpr_countries
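
The gdpr_path helper used in the redirect assumes a matching route exists; here is a minimal hypothetical sketch (the path and controller action are illustrative, not from the original post):

# config/routes.rb
get '/gdpr', to: 'gdpr#index', as: :gdpr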

Categories: Rails

Python with VSCode (using Anaconda)


Let’s install Anaconda from https://www.anaconda.com/download/

Update your .bashrc or .zshrc with

export PATH="$HOME/anaconda3/bin:$PATH"
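
To confirm that the Anaconda interpreter is now first on your PATH (a quick check, not part of the original steps), reload your shell and run:

which python
python --version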

To change VS Code Python version:

Cmd+Shift+P -> Python: Select Interpreter -> choose Python 3

Install auto-formatting and linting packages

conda install pylint
conda install autopep8

Create a Python debug configuration.
The stopOnEntry option is buggy with Python as of this writing and makes breakpoints unusable, so we set it to false for now.
Setting pythonPath to Anaconda is only necessary if your default VS Code interpreter is different from Anaconda’s (and assuming you want to use Anaconda’s interpreter). Otherwise, you can leave it pointing to “${config:python.pythonPath}”.

{
  "name": "Python conda",
  "type": "python",
  "request": "launch",
  "stopOnEntry": false,
  "pythonPath": "~/anaconda3/bin/python",
  "program": "${file}",
  "cwd": "${workspaceFolder}",
  "env": {},
  "envFile": "${workspaceFolder}/.env",
  "debugOptions": ["RedirectOutput"],
  "args": ["-i"]
}
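
A tiny hypothetical script (not from the original post) to try the configuration: set a breakpoint on the first print line and launch with the "Python conda" configuration.

# debug_check.py
import sys

print("interpreter:", sys.executable)  # should point at Anaconda's python if pythonPath is set as above
print("args:", sys.argv[1:])           # should show ['-i'] from the "args" setting above
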
Categories: Python

Elixir Phoenix Cache


A simple cache implementation backed by ETS (Erlang Term Storage).

Let’s start with the implementation:

defmodule SimpleCache do
  use GenServer

  @table :simple_cache

  def init(_) do
    :ets.new(@table, [
      :set,
      :named_table,
      :public,
      read_concurrency: true,
      write_concurrency: true
    ])

    {:ok, %{}}
  end

  def start_link do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  def fetch(key, expires_in_seconds, fun) do
    case lookup(key) do
      {:hit, value} ->
        value

      :miss ->
        value = fun.()
        put(key, expires_in_seconds, value)
        value
    end
  end

  defp lookup(key) do
    case :ets.lookup(@table, key) do
      [{^key, expires_at, value}] ->
        case now() < expires_at do
          true -> {:hit, value}
          false -> :miss
        end

      _ ->
        :miss
    end
  end

  defp put(key, expires_in_seconds, value) do
    expires_at = now() + expires_in_seconds
    :ets.insert(@table, {key, expires_at, value})
  end

  defp now do
    :erlang.system_time(:seconds)
  end
end


Update application.ex

 def start(_type, _args) do
    import Supervisor.Spec

    children = [
      supervisor(SimpleCache, [])
    ]
    opts = [strategy: :one_for_one, name: Supervisor]
    Supervisor.start_link(children, opts)
  end

Finally, use it

    cache_for_seconds = 60
    key = "key"

    SimpleCache.fetch(key, cache_for_seconds, fn ->
      {:ok, some_expensive_operation}
    end)

Relevant links:
https://stackoverflow.com/questions/35218738/caching-expensive-computation-in-elixir
https://dockyard.com/blog/2017/05/19/optimizing-elixir-and-phoenix-with-ets

Categories: Elixir, Phoenix, Web

Setting up VS Code with Rails, Elixir, JavaScript


Let’s make sure we can start VS Code from the terminal:

Command + Shift + P
Type Shell
Select Shell Command: Install 'code' command in PATH

Extensions

Rails

JavaScript

Git

Elixir

Other stuff

Personal Settings

"editor.formatOnSave": true,
  "editor.fontLigatures": true,
  "editor.fontFamily": "FiraCode-Retina",
  "editor.fontSize": 18,
  "editor.renderIndentGuides": true,
  "files.exclude": {
    "**/.git": true,
    "**/node_modules": true,
    "**/bower_components": true,
    "**/tmp": true,
    "tmp/**": true,
    "**/vendor": true,
    "vendor": true,
    ".bundle": true,
    ".github": true,
    ".sass-cache": true,
    "features/reports": true
  },

  "editor.tabSize": 2,
  "prettier.singleQuote": true,
  "workbench.colorTheme": "Monokai",
  "window.zoomLevel": 0,
  "editor.renderWhitespace": "boundary",
  "editor.renderControlCharacters": true,

  "ruby.lint": {
    "rubocop": true,
    "ruby": true,
    "fasterer": true,
    "reek": false,
    "ruby-lint": false
  },
  "editor.quickSuggestions": {
    "strings": true
  },

  "cucumberautocomplete.steps": [
    "features/step_definitions/*.rb",
    "features/step_definitions/**/*.rb",
    "features/step_definitions/**/**/*.rb"
  ],
  "cucumberautocomplete.syncfeatures": "features/*feature"

Some common exclusions for .solargraph.yml (you can place it in the root of your project):

---
include:
- "app/**/*.rb"
- "lib/**/*.rb"
- "engines/engine_name/app/**/*.rb"
- "engines/engine_name/lib/**/*.rb"
- "config/**/*.rb"
exclude:
- app/javascript/**/*
- node_modules/**/**
- spec/**/*
- test/**/*
- vendor/**/*
- .bundle/**/*
- uploads/**/*
- .git/**/*
- engines/engine_name/.bundle/**/*
- engines/engine_name/vendor/**/*
- coverage/**/*
require: []
domains: []
reporters:
- rubocop
- require_not_found
require_paths: []
max_files: 5000
plugins:
- runtime

Categories: Development Setup, Rails, Ruby