Posts

Showing posts from March, 2018

Rails Goodness: bundle exec for Ruby version mismatch errors

Rails is sometimes weird: it complains that the Ruby version you are running is different from the one specified in the Gemfile, yet when I check the version using $ ruby -v, it all looks good. The only solution is to force Rails to pick up the Ruby version from inside the directory you are working in, and the way to do that is bundle exec, like so:

  $ RAILS_ENV=development bundle exec rake app:run

Thank you!!
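A minimal sketch of the situation (the Gemfile pin and version number here are illustrative, not from the post):

  # Gemfile inside the project directory pins the interpreter, e.g.:
  #   ruby '2.4.1'

  $ ruby -v                                           # the shell's Ruby can differ from the pin
  $ RAILS_ENV=development bundle exec rake app:run    # bundler resolves Ruby against this directory's Gemfile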

Git: How to generate a diff file in git

Tools like Review Board allow you to generate code reviews. Although they ship an rbt tool that generates the review with a diff, sometimes there is a need to generate this diff file manually and upload it. The command below will generate a diff file you can use for the same:

  $ git show --full-index > ~/mytest.diff
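If the review should cover more than the latest commit, a variation (the branch name my-feature is illustrative) is to diff the whole branch against master:

  $ git diff --full-index master..my-feature > ~/mytest.diff    # all changes on the branch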

Git: How to Squash your commits into one commit before merging

When you are doing feature development, there are typically many local commits along the way. It is always a good idea to squash all those commits into one commit and then merge that to the master branch. Below are the steps to do the same.

Step 1: Start git rebase in interactive mode

  $ git rebase -i master

Once you do this, you will see all your commits in an editor, like so:

  pick 1b6f7fed87 feature1
  pick 1b6f7fed85 old commit1
  pick 1b6f7fed83 old commit2
  pick 1b6f7fed81 old commit3

  # Rebase 806f77feab..1b6f7fed87 onto 806f77feab (1 command)
  #
  # Commands:
  # p, pick = use commit
  # r, reword = use commit, but edit the commit message
  # e, edit = use commit, but stop for amending
  # s, squash = use commit, but meld into previous commit
  # f, fixup = like "squash", but discard this commit's log message
  # x, exec = run command (the rest of the line) using shell
  # d, drop = remove commit
  #
  # These lines can be re-ordered; they are executed from top to bottom.
  #
  # If you remove a line here THAT COMMIT WILL BE LOST.
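To finish the squash, you would change every line except the first from pick to squash (or s) and save; a sketch using the commits above:

  pick   1b6f7fed87 feature1
  squash 1b6f7fed85 old commit1
  squash 1b6f7fed83 old commit2
  squash 1b6f7fed81 old commit3

On save, git melds the squashed commits into the one above them and opens the editor once more so you can write the single combined commit message.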

Rails Goodness: What Is Jsonify in Rails, and Why It Is Needed for REST-Based Controllers

Jsonify is one of the original ways Rails converts its models to JSON views. With .jsonify views, you can move the shenanigans of converting models to JSON into a .jsonify file. It will also allow you to filter or transform the model into the desired JSON output. If you are using pure REST controllers with no HTML views, you need to have a .jsonify file (a blank file in the view folder is fine as well) for the controllers and RSpec to work. A sample .jsonify file:

  json.applications(@applications) do |app|
    json.id app.id
    json.name app.name.force_encoding(Encoding::UTF_8)
    json.guid app.guid.to_s
    json.icon app.icon_path
    json.created_at app.created_at.localtime
    json.created_at_human l(app.created_at.localtime, :format => :human_date)
    json.updated_at app.updated_at.localtime
    json.updated_at_human l(app.updated_at.localtime, :format => :human_date)
  end
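For the blank-file case the post mentions, a minimal sketch (the view path and controller name are assumptions here, and the exact template extension can vary by setup):

  # hypothetical: an empty jsonify view, enough for an ApplicationsController
  # JSON action and its specs to resolve a template
  $ touch app/views/applications/index.json.jsonify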

Rails Goodness: How to Test/Update Your Migration and Add a Forgotten Index

Suppose you have created a migration, say:

  class CreateEmployees < ActiveRecord::Migration
    def change
      create_table :employees do |t|
        t.string :first_name
        t.string :last_name
        t.string :icon
        t.integer :emp_id
        t.timestamps
      end
    end
  end

Now let's say you need to add an index to it. Below are the steps.

Step 1: Roll back the current migration

  $ RAILS_ENV=development rake db:rollback

Step 2: Add the index to the migration manually

  class CreateEmployees < ActiveRecord::Migration
    def change
      create_table :employees do |t|
        t.string :first_name
        t.string :last_name
        t.string :icon
        t.integer :emp_id
        t.timestamps
      end
      add_index :employees, [:first_name]
    end
  end

Step 3: Test the migration by moving 10 steps back in schema.rb and then redoing up to the current one

  $ RAILS_ENV=development rake db:migrate:redo STEP=10

The step count can be any number; the point is to ensure robustness, so that schema.rb retains its current state.
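As a quick sanity check after Step 3, one option (a sketch; depending on the Rails version, the dumped schema lists indexes via add_index or t.index):

  $ RAILS_ENV=development rake db:migrate
  $ grep -n index db/schema.rb    # should now show an index on employees.first_name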

Rails - Part1 - Employee CRUD API - Create a table, model and migration in Rails

Rails is a convention-over-configuration framework, and Rails4 makes life simpler by letting you quickly put things in perspective. In this series, we will start with an Employee model, create its corresponding table and migrations, and then expose the Employee model in the Rails API. https://github.com/makrand-bkar/emp-hello

Part1 - We will create the model, its table, and the corresponding factory and test API.

Problem Statement

Let's say we want to create a simple Employee table:

  Employee {
    first_name : string,
    last_name  : string,
    icon       : string,
    emp_no     : integer
  }

Assumptions

I am assuming that you already have a Rails project running, and we will just be adding to the existing project. You can clone the initial project from https://github.com/makrand-bkar/emp-hello

Step1: Create a model for Employee

  $ RAILS_ENV=development rails g model Employee first_name:string last_name:string icon:string emp_id:integer

Step1 will create the model, its table (via a migration), and the corresponding factory and test files.
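A minimal sketch of the full flow (standard Rails commands; note the generator expects name:type with no space around the colon):

  $ RAILS_ENV=development rails g model Employee first_name:string last_name:string icon:string emp_id:integer
  $ RAILS_ENV=development rake db:migrate    # applies the generated migration and creates the employees table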

Running the Hadoop Examples WordCount

Introduction

These are the follow-up session notes from the earlier session on the Hadoop WordCount program. In the earlier session, we covered why WordCount, the anatomy of it, and also wrote the program in Eclipse. In this session we will:

  * Run the WordCount program in Eclipse
  * Run the WordCount program given in examples.jar in single-node and multi-node environments
  * Go through the Hadoop URLs to see the stats of the program
  * Create HDFS folders and browse HDFS from the URLs

Hadoop URLs

  localhost:50070 – URL of the NameNode
  localhost:50060 – URL of the TaskTracker
  localhost:50030 – URL of the JobTracker

In a multi-node environment:

  <host>:50070 – NameNode
  <host>:50030 – JobTracker (the JobTracker schedules jobs)
  <host>:50060 – TaskTracker

Hadoop Commands

Create a data directory on HDFS:

  hadoop fs -mkdir hdfs://<url>:8020/user/mytest1    (to verify, go to <url>:50070)

Copy a data file from local disk to HDFS:

  hadoop fs -copyFromLocal /home/dataset hdfs://localhost:8020/Data1/

Delete files from HDFS
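Tying the commands together, an end-to-end run of the shipped example (the jar name/path and the part-r-00000 output filename vary by Hadoop version, so treat these as illustrative):

  $ hadoop fs -mkdir hdfs://localhost:8020/user/mytest1
  $ hadoop fs -copyFromLocal /home/dataset hdfs://localhost:8020/user/mytest1/
  $ hadoop jar hadoop-examples.jar wordcount /user/mytest1/dataset /user/mytest1/out
  $ hadoop fs -cat /user/mytest1/out/part-r-00000    # one word<TAB>count pair per line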

Hadoop Map Reduce – WordCount Program and Notes

MapReduce Notes – classroom notes for Nov 28

Analysing the Google Search Algorithm

Let's take an example: if Google has to index, say, all 1 billion pages, how will it do it? For example, say you search:

  category | search terms
  hadoop   | hadoop training, hadoop syllabus, hadoop architecture
  orange   | orange shades, orange juice

Google will categorize the pages and then sub-categorize them based on the relevant keywords. Among the things it can look at:

  * word count
  * text search
  * frequency of occurrences on that page

Google changes its policies every 15 days, so SEO companies have to keep optimizing sites accordingly; there are more than 95 algorithms on which Google SEO can be based. In general, we can say the Google crawler scans billions of pages and organizes them per:

  * relevancy
  * category
  * page rank
  * word count
  * location

Categorization is the first important step: we need to categorize a page as, say, hadoop or chicken. A page can belong to many categories
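As a toy illustration of the word-count/frequency signal (plain shell over a single hypothetical page.txt, nothing Hadoop-specific):

  $ tr -s '[:space:]' '\n' < page.txt | sort | uniq -c | sort -rn | head
  # splits the page into words, counts each distinct word, lists the most frequent first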

Hadoop Installation Overview – Part1 – Single Node

Hadoop Installation Overview – Single Node – Part1

Overview

This is part1 of the Hadoop installation document. We will be covering installing Hadoop in a single-VM environment; both the NameNode and the DataNode will run on the same VM.

Assumptions

This document assumes that VMware Workstation is already installed on your machine; check the link below to install it: install vmware workstation on vmware. Install Ubuntu 12.04 LTS: download the .iso file and install it: install ubuntu.

Installation Steps

Step1: Install VMware Workstation, as mentioned above.

Step2: Install Ubuntu, as mentioned above. **Note – make sure you select the 32-bit or 64-bit version based on your Windows version.

Step3: Install OS updates for Ubuntu. Go to the terminal and type

  $ sudo apt-get update

Step4: Install OpenJDK 6. Since Hadoop is all Java, we need the JDK.

  $ sudo apt-get install openjdk-6-jdk

Step5: Install Eclipse. From the UI, go to the Software Center and install Eclipse.

Step6: Install the OpenSSH server:

  $ sudo apt-get install openssh-server
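A quick sanity check after Steps 3–6 (standard Ubuntu commands; the exact output wording varies by package version):

  $ java -version    # should report an OpenJDK 6 runtime
  $ ssh localhost    # should connect once the SSH server is installed and running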

Hadoop Installation Overview – Part2 – Pseudo-Distributed

Hadoop Installation Overview – Pseudo-Distributed

Overview

This is part 2 of the Hadoop installation document. In part1 we covered installing Hadoop with the NameNode and DataNode on a single-VM environment. In the pseudo-distributed setup, we will install 2 VM instances, where one machine will act as a NameNode+DataNode whereas the other will act as a DataNode only.

Initial Setup

Goal – Create two 64-bit Ubuntu VMs, each one having the Hadoop setup as in part1.

Steps

Step1 – Start with the VM you already have from part1.

Step2 – There are two ways we can clone the VM:

  Full Clone – Creates an entire copy of the VM. The best way to create the first VM (which will act as the NameNode).
  Linked Clone – Does not create a full copy, but builds on top of the original. The best way to create the second, DataNode-only VM.

Step a: Create a full clone of the VM we have from part1 and name it as the namenode. (To verify, open sudo gedit /etc/hostname.)

Step b: Create a linked clone of the full-clone VM and name it as the datanode
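A sketch of the hostname step on each clone (the datanode name is illustrative):

  # on the full clone
  $ sudo gedit /etc/hostname    # should read: namenode
  # on the linked clone
  $ sudo gedit /etc/hostname    # should read: datanode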