How to get a GitHub vanity URL through Git.io?

If your GitHub username is already taken (I usually go by thiru, but it was taken, so I had to settle for ithiru), you can still share a URL that carries your preferred name through GitHub's URL shortener, Git.io.

Use the following command to claim your custom short code. Do not shorten the URL through the web form at git.io first: the web form auto-generates a code, and any later attempt through the command line will just return that same generated URL. The form field

-F "code=vanity"

is what requests a custom code (here, vanity) instead of an auto-generated one. Because I had already shortened my URL via the browser, the first attempt below returned the generated code rather than the one I asked for.

$ curl -i https://git.io -F "url=https://github.com/ithiru" -F "code=thiru"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Server: Cowboy
Connection: keep-alive
Date: Thu, 22 Sep 2016 05:05:35 GMT
Status: 201 Created
Content-Type: text/html;charset=utf-8
Location: https://git.io/vi76a
Content-Length: 25
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Runtime: 0.256529
X-Node: 27a81f0a-cb95-4f6d-94d6-0c9891e3714b
X-Revision: 4fdc60de6311e6a3aa31e19bc7b3aad7e85d33a6
Strict-Transport-Security: max-age=31536000; includeSubDomains
Via: 1.1 vegur

If you have already tried it by mistake in a browser at https://git.io, you can use an anchor trick to make git.io treat the URL as new:

$ curl -i https://git.io -F "url=https://github.com/ithiru#start-of-content" -F "code=thiru"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Server: Cowboy
Connection: keep-alive
Date: Thu, 22 Sep 2016 05:12:47 GMT
Status: 201 Created
Content-Type: text/html;charset=utf-8
Location: https://git.io/thiru
Content-Length: 42
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Runtime: 0.255721
X-Node: 722fe806-9844-4265-9a8f-c51d2c170cf7
X-Revision: 4fdc60de6311e6a3aa31e19bc7b3aad7e85d33a6
Strict-Transport-Security: max-age=31536000; includeSubDomains
Via: 1.1 vegur

via GitHub Blog

Jeyamohan’s Creations – Web App

Jeyamohan is a famous writer from Tamil Nadu, India (bio on Wikipedia).
When I started reading his series of novels on the Indian epic Mahabharata, titled ‘Venmurasu’ (வெண்முரசு), I was drawn in deeply by its artistic description of the events.

But he was releasing one chapter each day, and that was difficult for me to follow; I need to read a whole book in one stretch, that is how I am wired. So I stopped reading and waited until he completed each book, then read it in full.

While reading, I found that the Venmurasu website was a WordPress blog, which is really not a good way to present a novel: it displays the chapters in reverse chronological order rather than in the normal chapter flow of a book.

I started thinking about scraping the content and presenting it with a responsive web design, so it would be easier for me to read on my phone, tablet, or desktop, wherever I am.

App Development

In September 2014 I started on the screen scraping with a Node.js app, with the UI built using the Express module.

Screen Scraping

  • Navigate through the chapter URLs
  • Extract the content with the cheerio package's jQuery-style selectors (see the sketch after this list)
  • Extract the chapter-art image URL, download the full image, and generate a thumbnail
  • Persist the data into tables by Book, Volume, and Chapter
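
A minimal sketch of the extraction step, assuming the request and cheerio packages; the selectors and the chapter URL below are hypothetical placeholders, not the app's actual code:

// scrape.js — a sketch of one chapter's extraction, not the real scraper.
var request = require('request');
var cheerio = require('cheerio');

function scrapeChapter(url, done) {
  request(url, function (err, res, body) {
    if (err) return done(err);
    var $ = cheerio.load(body);
    // jQuery-style selectors; the real WordPress markup may differ.
    var chapter = {
      title: $('h1.entry-title').text().trim(),
      content: $('div.entry-content').html(),
      artUrl: $('div.entry-content img').first().attr('src')
    };
    done(null, chapter);
  });
}

scrapeChapter('http://example.com/some-chapter', function (err, ch) {
  if (err) throw err;
  console.log(ch.title); // persist to the Book/Volume/Chapter tables here
});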

UI Design

As I wanted to read on various devices, I went with a responsive web design. Even though Twitter Bootstrap is very popular, I chose Zurb Foundation because it gives you just the bare bones and lets you customize through SASS variables, giving complete control over the design.

But while building the app I did not customize Zurb Foundation at all; I used it as-is by downloading the CSS/JS files into public and hand-coding the HTML, checking it in the browser as I went.

Only later did I adopt the productivity tools: Gulp, Twig, SASS customization, BrowserSync, and others.
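
For example, a minimal Gulp setup along those lines might look like the sketch below (assuming gulp 3.x, gulp-sass, and browser-sync; all paths are hypothetical, not the project's actual build file):

// gulpfile.js — a sketch, not the project's actual gulpfile.
var gulp = require('gulp');
var sass = require('gulp-sass');
var browserSync = require('browser-sync').create();

// Compile the Foundation SASS overrides into public/css.
gulp.task('sass', function () {
  return gulp.src('scss/**/*.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulp.dest('public/css'))
    .pipe(browserSync.stream());
});

// Serve with live reload: recompile on SASS changes, reload on template changes.
gulp.task('serve', ['sass'], function () {
  browserSync.init({ proxy: 'localhost:3000' });
  gulp.watch('scss/**/*.scss', ['sass']);
  gulp.watch('views/**/*.twig').on('change', browserSync.reload);
});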

Infinite Scroll

I implemented infinite scroll for two reasons. First, I can scroll to the bottom of a chapter a few times to pre-load the following chapters, which caches them in the browser so I can keep reading even when offline. Second, it reduces the clicks/taps needed to navigate: the next chapter loads and is ready on its own, instead of the reader tapping a next link and waiting for a page load, which would break the reading flow.
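
The core of the behavior is small. Here is a sketch, assuming a hypothetical /chapter/<n> endpoint that returns the next chapter as an HTML fragment (the real app's routes may differ):

// infinite-scroll.js — a sketch of the client-side loader.
var nextChapter = 2;   // the chapter to fetch next
var loading = false;   // guard against duplicate requests

window.addEventListener('scroll', function () {
  var nearBottom = window.innerHeight + window.pageYOffset
                   >= document.body.offsetHeight - 200;
  if (!nearBottom || loading) return;
  loading = true;

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/chapter/' + nextChapter); // hypothetical endpoint
  xhr.onload = function () {
    if (xhr.status === 200) {
      // Append the chapter so it is cached in the page and readable offline.
      var section = document.createElement('section');
      section.innerHTML = xhr.responseText;
      document.getElementById('chapters').appendChild(section);
      nextChapter += 1;
    }
    loading = false;
  };
  xhr.send();
});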

Prerequisites

Get the Files

GitHub project:

git clone git@github.com:ithiru/jemo.git

Data

Before running the application, the website needs to be scraped and the result loaded into a database.

Load the data using the SQL dump available in the data folder:

mysql -h <host> -u <user> -p <database> < data.sql

.env

Create a .env file under the node-app folder with the following values so the app can connect to the MySQL instance where the data has been loaded; a sketch of how the app might read them follows the block.

DB_HOST=<host>
DB_USER=<user>
DB_PASS=<password>
DB_NAME=<db>
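
A sketch of how the app might consume these values, assuming the dotenv and mysql npm packages (the repo's actual wiring may differ):

// db.js — a sketch of reading .env and opening a MySQL connection pool.
require('dotenv').config();
var mysql = require('mysql');

var pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME
});

// Quick sanity check that the credentials work.
pool.query('SELECT 1', function (err) {
  if (err) throw err;
  console.log('Connected to ' + process.env.DB_NAME);
});

module.exports = pool;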

How to Run the application


git clone git@github.com:ithiru/jemo.git
cd jemo/Docker
docker build -t jemo .
cd ..
# The port mapping and image name were cut off in the original post;
# 3000:3000 and jemo are assumptions, matching the image built above.
docker run -it --rm -v $HOME/jemo/node-app:/app --link mysql:mysql -p 3000:3000 jemo

Screenshots

[Screenshots of the responsive UI in portrait on iPhone 5, iPhone 6, and iPhone 6+, and on iPad and MacBook Pro]

App Demo

New UI

[Screenshot: the new UI]

Digital Ocean Setup with Docker

Digital Ocean Setup

Docker

My Digital Ocean setup uses Docker and Nginx to run my blog, my friend's blog, and the Ponniyin Selvan community site.

The day Docker announced its release candidate, after reading about application virtualization, I could immediately sense the advantages of isolating applications and upgrading them without affecting other applications/components. I started experimenting with it on Digital Ocean on May 26th, 2014, and have never looked back since.

Some of the advantages I found during my experiments

  • I can hide application instances from the public internet. For example, MySQL is reachable only by the other Docker containers.
  • I have an old WordPress blog that, due to plugin dependencies, does not work properly on PHP 5.4.x, so I have to run an old PHP and PHP 5.4+ at the same time. Docker lets me run both without worrying about conflicting dependencies and upgrades; I proxy to the respective PHP FastCGI container based on the domain/blog.
  • I was able to switch from MySQL to MariaDB silently: everything else, including the container name (mysql!), stayed the same, but internally it now runs MariaDB.
  • I can run different Linux distributions on top of Ubuntu when a component demands one. Without Docker I would have to provision another Digital Ocean droplet and install that particular OS.
  • I can run multiple instances of the same Docker image, upgrade and test them, and make sure the newer version of the application works before switching it over to production.
  • Portability: when I upgraded from Ubuntu 14.04 to 15.10 I ran into issues on the same Digital Ocean droplet, so I created a new 15.10 droplet, exported the Docker images from the 14.04 droplet, and imported them into the new one. No need to go through the whole installation procedure again.
  • I can create multiple versions of an image and switch between them.
  • Lightweight: less memory and cheaper than creating a new VPS.

[Screenshots: Docker containers running and Docker images on the Digital Ocean droplet, and the equivalent Docker setup and images at work]

Nginx

When I started putting up public websites with Dreamhost on 7th Dec 2006, they offered cheap hosting for $2 a month. It was shared hosting, where you cannot run many apps due to memory/CPU restrictions, and my websites were served by the Apache web server.

After my websites started drawing some traffic, Dreamhost sent a nice email on 26th Dec 2008 saying that my websites were consuming a lot of CPU/RAM and that I had better move to a private server, with a promotion attached. I took it and ran Lighttpd with some Lua scripting for a couple of years.

I finally moved to Nginx on 26th Jan 2010 and have never looked at any other web server since. Nginx is fast, consumes less memory, and is easier to customize than Apache.

When I was doing a POC at work to convert an existing web application into a hybrid application, I had to use some Lua scripting in the Nginx layer for single sign-on. After that I started using the Nginx build from the OpenResty distribution, whose embedded Lua scripting lets me do much more than static configuration.

Backup and alerts using IFTTT and Pushbullet

Instead of sending and checking email to make sure my backups are running, and to know when they start and end, I use the IFTTT Maker Channel, which pushes the data to Pushbullet; the Pushbullet app on my Android phone then shows a notification when a backup completes.

I create a daily incremental backup named for the day of the week, a full monthly backup named for the month, and a weekly backup named for the week number within the month. The naming scheme caps the number of backup files I create, so I never have to go back and remove older backups. The daily script:

# Notify via the IFTTT Maker Channel (which pushes to Pushbullet) that the backup started.
curl -X POST -H "Content-Type: application/json" -d '{"value1":"Started","value2":"test","value3":"test"}' https://maker.ifttt.com/trigger/ubuntu_daily_backup/with/key/RedactedMakerKey
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd /data
# Incremental backup of /data named for the day of the week (e.g. Monday.tar.gz);
# the --listed-incremental snapshot file is what makes the tar incremental.
sudo tar -czvf backups/$(date '+%A').tar.gz . --exclude=swap4G.swap --exclude=backups --listed-incremental=backups/data.snar
sudo mv -f backups/$(date '+%A').tar.gz /backups/daily/
# Notify that the backup completed.
curl -X POST -H "Content-Type: application/json" -d '{"value1":"Completed","value2":"test","value3":"test"}' https://maker.ifttt.com/trigger/ubuntu_daily_backup/with/key/RedactedMakerKey

[Screenshots: the Pushbullet notification on Android, and the backup schedule]