Tuesday, 14 October 2014

MySQL data sample

Hi,

So today I couldn't find any way (well, at least without iterating with mysqldump) to export just a subset of a database's tables, so I wrote the following bash script:

partial-mysqldump
#!/bin/bash

usage() { echo "Usage: $0 [-h <host>] [-u <user>] [-p <pass>] [-d <database>] [-t <table>]... [-f <output_file>] [-l <limit>]" 1>&2; exit 1; }

while getopts ":t:f:d:l:p:u:h:" opt; do
    case "${opt}" in
        f)
            f=${OPTARG}
            ;;
        d)
            d=${OPTARG}
            ;;
        p)
            p=${OPTARG}
            ;;
        h)
            h=${OPTARG}
            ;;
        u)
            u=${OPTARG}
            ;;
        l)
            l=${OPTARG}
            ;;
        t)
            t+=("$OPTARG")
            ;;
        *)
            echo
            echo "[ERROR]: Unknown option, ignoring ${OPTARG}"
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${h}" ]; then
    h='127.0.0.1'
fi

# build the --where option as an array so its embedded spaces survive word splitting
if [ -z "${l}" ]; then
    limit=()
else
    limit=(--where="1 limit $l")
fi

if [ -z "${d}" ] || [ -z "${t}" ] || [ -z "${f}" ]; then
    usage
else
    for val in "${t[@]}"; do
        mysqldump \
            -h "$h" \
            -u "$u" \
            -p"$p" \
            --no-create-info \
            --lock-tables=false \
            "${limit[@]}" \
            --databases "$d" \
            --tables "$val" \
        >> "$f"
    done
    gzip -9 -c "$f" > "$f.gz" && rm "$f"
fi


So now I can just limit my data and export what I want:

./partial-mysqldump -u <user> -h <host> -p <password> -d <database> -t <table1> -t <table2> -t <table3> -f test -l 1000

will dump the first 1000 rows of table1, table2 and table3 (note that -t is repeated once per table; that is how getopts accumulates them into the array).

The problem with mysqldump's --where="1 limit <yourlimit>" option is that the limit counts against the overall result set, not against each table.
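In other words, the script sidesteps this by invoking mysqldump once per table, so each table gets its own limit. Each loop iteration boils down to something like this (written out by hand with placeholder names, so treat it as a sketch):

mysqldump -h <host> -u <user> -p<password> \
    --no-create-info --lock-tables=false \
    --where="1 limit 1000" \
    --databases <database> --tables <table1> >> test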

Cheers
Tiago






Sunday, 12 October 2014

why stop using GNU screen?


I've been using GNU screen effectively for the last couple of years, and although I'd heard about tmux I never considered changing. Basically out of convenience, and because screen did the job...

So, for those unfamiliar with this type of tool: tmux and screen are software apps that allow you to multiplex VT sessions. Let's say they are window managers for the text console.
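The core workflow looks like this in tmux (the session name here is just an example):

$ tmux new -s work       # start a named session
  # ...run your stuff, then detach with Ctrl-b d...
$ tmux ls                # the session survives, even if your SSH connection drops
$ tmux attach -t work    # pick up exactly where you left off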



Recently I hit some limitations of screen and decided to try out tmux (which actually does a great job).
Obviously I'm still used to screen's interface and commands, so getting to a more "proficient" level with tmux is going to be gradual, as it always is.

I found an interesting comparison blog post about both tools here: 

Cheat sheet
tmux <--> screen:
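From memory, a few default bindings I keep reaching for (screen's prefix is Ctrl-a, tmux's is Ctrl-b); double-check them against the man pages:

  • prefix key: Ctrl-b (tmux) <--> Ctrl-a (screen)
  • new window: prefix c (same in both)
  • next / previous window: prefix n / prefix p (same in both)
  • window list: prefix w (tmux) <--> prefix " (screen)
  • split top/bottom: prefix " (tmux) <--> prefix S (screen)
  • split left/right: prefix % (tmux) <--> prefix | (newer screen versions)
  • detach: prefix d (same in both)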

Wednesday, 8 October 2014

Productivity boost

So, how can I boost my productivity today?

Let's try some bash shortcuts with "fasd".

Fasd (pronounced similar to "fast") is a command-line productivity booster. Fasd offers quick access to files and directories for POSIX shells. It is inspired by tools like autojump, z and v. Fasd keeps track of the files and directories you have accessed, so that you can quickly reference them on the command line.


$ brew info fasd
fasd: stable 1.0.1
https://github.com/clvv/fasd

$ brew home fasd

## INSTALL

$ brew install fasd
$ echo 'eval "$(fasd --init auto)"' >> ~/.bash_profile
$ source ~/.bash_profile
$ cd ~/Desktop/          # visit a directory so fasd learns about it
$ cd -
$ a                      # list recent files and directories, ranked by "frecency"
$ z desk                 # jump straight back to ~/Desktop by fuzzy match
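A couple more of the default aliases, plus the editor trick from the fasd README (the v alias is the README's suggestion, not a built-in):

$ d conf                 # list matching directories
$ f txt                  # list matching files
$ alias v='f -e vim'     # straight from the fasd README
$ v notes                # open the highest-ranked file matching "notes" in vim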

see the point now?


Just check https://github.com/clvv/fasd; you'll find a lot more there.






nodejs ORM


So, you're looking for a node.js ORM... I'll just throw in a bunch of links for now and improve the post later. Enjoy your research.

sequelizejs

http://sequelizejs.com
https://github.com/sequelize/sequelize

bookshelfjs

persistencejs

https://github.com/coresmart/persistencejs
http://zef.me/2774/persistence-js-an-asynchronous-javascript-orm-for-html5gears/

knex.js

node-orm2

Tuesday, 7 October 2014

RRD tools

So it's time to choose an RRD tool... wait, what does RRD mean?

From http://en.wikipedia.org/wiki/RRDtool:
RRDtool (an acronym for round-robin database tool) aims to handle time-series data like network bandwidth, temperatures, CPU load, etc.
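The "round-robin" part means the database file is created with a fixed size up front, and the oldest samples get overwritten as new ones arrive. A minimal sketch of the workflow (the data source name, limits and file names here are made up):

# a DB expecting one temperature reading every 300s,
# keeping 288 averaged samples (one day at 5-minute resolution)
$ rrdtool create temperature.rrd --step 300 \
    DS:temp:GAUGE:600:-40:100 \
    RRA:AVERAGE:0.5:1:288

# push a reading ("N" means now)
$ rrdtool update temperature.rrd N:21.5

# render the last day as a PNG
$ rrdtool graph temperature.png --start -1d \
    DEF:t=temperature.rrd:temp:AVERAGE LINE2:t#FF0000:"temp"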

Here's an interesting presentation:

Friday, 29 April 2011

AWK - a simple tool for parsing files

Intro

Every once in a while, a programmer comes across the challenge of collecting/parsing data from a structured file. For this, we need to parse the file with some tool or language. You could do it in almost any language, but let's see how it's done with awk.

So let's start explaining awk:


FACTS
  1. Programming language
  2. Simple and Fast
  3. Inspired Perl
  4. C syntax
  5. Ideal for processing data files (e.g. CSV)

AWK file structure

BEGIN { code1 }
{ code2 }
END { code3 }

So:
  • code1 runs before the actual parsing starts. It can be useful to initialize some variables, for example.
  • code2 runs every time a new line is parsed. Treat it as "what to do for each line".
  • code3 runs at the end of the parsing. You can use it to process the data collected by code2 and print it, for example.

Built-In variables

  • $0 -> The current line
  • $N -> The Nth field of the current line
  • NR -> The current record (line) number
  • NF -> Number of fields in the current line
  • FILENAME -> Name of the file being parsed
  • ...
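A quick one-liner to see these in action (data.txt is the file from Example 1 below):

~$ awk '{ print FILENAME ": line " NR " has " NF " fields; first field = " $1 }' data.txt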

Example 1

Consider this simple file "data.txt":

12
50
12
35
12
12
...

These values could be anything, from grades, to how long a process took to do a transaction, to the size of the data we save to disk in each process, etc.

Imagine you wanted the sum and average of these numbers; you could simply write this in the console:

~$ awk '{ s += $1 } END { print "sum: ", s, " average: ", s/NR }' data.txt

You don't even have to write a source file to do this; simply write your code directly between the single quotes. Of course, if what you want to do is a little more complex, this gets troublesome. Then you can write the code in a file, as we'll see in the next example.


Example 2

Consider this simple file grades.txt:

name number grade1 grade2
rei 666 20 20
deus 876 17 15
norad 555 5 9
mbp 000 0 0

We have a simple file with the students' names, numbers and grades.

Let's calculate each grade's average and save it in a new file:



BEGIN {}
{
    if (NR != 1) { # skip the header line
        grades[0] += $3;
        grades[1] += $4;
    }
}
END {
    # a space between strings concatenates them
    name = FILENAME "_parsed.txt";
    # a comma in print inserts the output field separator (a space by default)
    print "Average grade 1: ", grades[0]/(NR-1) >> name;
    print "Average grade 2: ", grades[1]/(NR-1) >> name;
}

So this is the file parse.awk. To run it, simply type in the console:

~$ awk -f parse.awk grades.txt
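For the grades above, the generated grades.txt_parsed.txt should contain:

Average grade 1:  10.5
Average grade 2:  11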


Conclusion

There you have it: two basic examples to get you started with AWK.
You can do many more things with it, like using loops, arrays, regular expressions, etc.

The best way to learn more is to have an actual real case problem, so the next time you have to parse a file and do fairly basic stuff with it, give AWK a try.


Friday, 4 March 2011

Yet Another Event-driven Post

If you follow us, you have certainly caught our previous post (from Miguel) about Libevent. I must say, Libevent seems really cool, but it is still C. Only a few of us like C, and it almost forces us to use threads (which means more resource consumption and more complexity) to perform what are simple tasks in higher-level languages. So how can we do all these things more easily?

So how about leaving Libevent to the gurus who develop database and filesystem access drivers, and focusing on a single-threaded, powerful, non-blocking event-loop programming style?

Welcome to the wonderful world of Node.js!

At a glance, Node is a JavaScript server-side programming environment (framework style) that provides the ability to handle server requests and responses, be it HTTP or raw TCP, with a seamless event-driven approach: JavaScript-ish callbacks leveraging a non-blocking I/O event loop. Being JS-based, Node also inherits all the document-processing tools of the client-side browser DOM, and a bunch of other cool stuff that allows you to develop kick-ass web applications using JavaScript, sometimes all over the stack (hello MongoDB!!).

To show you how painless Node can be, here is the implementation of the infamous chat server self-learning example:
// this is how you load the "net" module (which encompasses the TCP utilities);
// it's actually better to assign it to some variable and use it throughout the code
var net = require('net');

// connected sockets pool
var pool = [];

// create a TCP server instance (using the "net" module)
var server = net.createServer(function(socket) {
    // add the client socket to the pool
    pool.push(socket);
    // listen on the client's socket for incoming data
    socket.on('data', function(content) {
        // send the message to all clients
        var message = socket.remoteAddress + ' > ' + content;
        for (var i = 0; i < pool.length; i++) {
            pool[i].write(message);
        }
    });
    // remove inactive sockets from the pool
    socket.on('end', function() {
        var i = pool.indexOf(socket);
        pool.splice(i, 1);
    });
});

// run the server on port 8000 (or any other)
server.listen(8000);
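To try it out (assuming you saved it as chat.js), start the server and connect a couple of clients with netcat from separate terminals:

$ node chat.js
$ nc localhost 8000    # run this in two or more terminals and start typing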

There are roughly 20 lines of code, and it is actually readable!! Are you serious?!?! Goodbye Erlang and other funky stuff (just kidding here).

I am also just entering this wonderful world, so there is not much more I can say about it for now. Why don't you check the links below for more info?


Feel free to leave more interesting resources in the comments (please do). Nonetheless, we will be tracking Node's evolution closely here; it's really promising stuff.

Kudos to Ryan Dahl, the man that made it all possible.