
Backports are one of those things that happen all the time, even if you aren’t looking at them right now :P. Most distros do a lot of them.

What’s a backport? A backport is when you take code from, let’s say, an upstream version and bring it back to fix (or fit into) a version older than the upstream one. Sometimes doing that is quite simple, if the piece of code the patch touches hasn’t changed in upstream yet, or has changed only in a minimal way. Otherwise you sometimes need to practically perform ‘surgery’ on the code to backport pieces of the fix. Things can become a nightmare if you have a bunch of code to touch and much of it has already changed upstream. If you have no choice other than fixing/backporting, I’ll share some of my steps that may help you, but I’m also open to hearing what methods you use in this situation.

Doing backports can often be summarized in these few steps:

  1. get the package code to backport
  2. get the patches to fix into the backported version
  3. apply the patches
  4. build the package and pray not to see any errors/warnings or regressions (here, in the test phase)

For me things work like this… Once I get the package source, I copy it under another name, git-version, and run git init and git add * to create a git repository to handle my patches. Sometimes the package has an upstream git; I get that one too. Here you are probably wondering ‘why not just get the upstream git and jump to the branch or commit that matches the package’. Well, sometimes that match doesn’t exist: there is no correspondence between the packages in the distros and upstream. That happens because distros often keep a distro-version (let’s call it that).

Continuing… with my git-version I start to apply the patches using git apply --check. It gives you a dry run to see if the patch you want to apply really works or fails. If it works (yataaaaa! – Japanese for: uhuuuuuuuuu); if not (zannen desu ne! – Japanese for: it’s a pity). The second scenario will sometimes demand a lot of work from you, or just a few adjustments. But the point here is that with the git-version you can use things like GitGutter, if you use vim, and follow the changes as you type – that’s really nice!

When I finish my re-patch, or backport, I usually commit it under the same name as the patch I’m based on; this way things become much easier to track and check out. Supposing everything was OK with the re-patch, it’s time to generate the patch file: git format-patch -1 <sha-code> will do it for you. Now it’s just a matter of applying it against the original package source (quilt, dpatch, patchless…), building and checking that everything goes fine 🙂
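The whole git-version workflow can be sketched end to end. This is a toy, self-contained walkthrough: the package name foo-1.0, the file and the patch are all made up for illustration.

```shell
#!/bin/sh
set -e
work=$(mktemp -d); cd "$work"

# Pretend this is the unpacked package source
mkdir foo-1.0 && printf 'old line\n' > foo-1.0/file.c

# Copy it and turn the copy into a throwaway git repo
cp -r foo-1.0 foo-1.0-git
cd foo-1.0-git
git init -q && git add -A
git -c user.name=me -c user.email=me@example.com commit -qm 'import foo 1.0'

# A patch to try out (normally this comes from upstream)
cat > ../fix.patch <<'EOF'
--- a/file.c
+++ b/file.c
@@ -1 +1 @@
-old line
+new line
EOF

# Dry run first; apply for real only if the check passes
git apply --check ../fix.patch && git apply ../fix.patch

# Commit under the patch's name, then regenerate a clean patch file
git add -A
git -c user.name=me -c user.email=me@example.com commit -qm 'fix.patch'
git format-patch -1 HEAD
```

The regenerated 0001-fix.patch.patch is what you then feed to the original package’s patch system.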

Some other handy tools and tips:

  • vimdiff: shows the diff between two files side by side in the same window
  • diff: you can also use diff -Naur old new | less -S and check your diff that way

If you play with Debian packages you probably know what quilt is. This nice tool makes any developer’s life easier. But what if you don’t have a Debian package and still want to play with such magic?
It’s quite simple, believe me :).

The first thing you need to do is, of course, install quilt, if it isn’t already there.

  • sudo apt-get install quilt

Once that’s done, it’s time to see what it does for you. But before that, let’s configure a simple .quiltrc (this one needs to live at /home/username/.quiltrc, OK?). Add these lines to it:

QUILT_PATCHES=debian/patches # change debian/patches to just patches, for example
QUILT_NO_DIFF_INDEX=1
QUILT_NO_DIFF_TIMESTAMPS=1
QUILT_REFRESH_ARGS="-p ab"
QUILT_DIFF_ARGS="--color=auto" # this is for colors

This file basically says that your patches will live in your patches folder (or debian/patches if it’s a Debian package), and that’s the basics you need to know. If you want to dig more into those variables, just type man quilt in your terminal; the explanation will be there for sure.

In your source directory, if you already have a patches folder and want to apply the patches in it, type quilt push -a and everything will be applied.

[screenshot: output of quilt push -a]

If you want to check, just type quilt applied.

[screenshot: output of quilt applied]

Here are the main commands you’ll need to know:

  • quilt new name_of_your_patch.patch: creates a new patch ‘session’ with the given name
  • quilt add src/somecode.c: after creating a patch you need to add the files you’ll change, in order to actually create the patch.
  • quilt refresh: when you finish editing those ‘quilted’ files, do a refresh. If you cat patches/name_of_your_patch.patch you will see your nice patch there :).
  • quilt pop: used when you want to take a patch off the stack of applied patches; as you can see below, it also restores the changes you made to the file.

[screenshot: output of quilt pop]

I think quilt is the kind of tool that makes things magic for you. Imagine you have a lot of patches to apply, or a lot of backported changes to make and apply. Quilt gives you an easy way to manage this task in quite a beautiful way. And if you were thinking of using it only in your .deb, well, I’m pretty sure you’ll have fun using it in any project/source you develop.

References:

[1] https://wiki.debian.org/UsingQuilt

Yep, the title is exactly what you got. If you don’t have a Raspberry Pi (like me), your dreams of playing with ARM programming don’t end there. I’m saying that because you can do everything just using your old friend x86_64.
How to do that? First let’s install a cross compiler (gcc, in our case):

$ sudo apt-get install libc6-armel-cross libc6-dev-armel-cross
$ sudo apt-get install binutils-arm-linux-gnueabi
$ sudo apt-get install libncurses5-dev

Once you’ve done that, type arm-linux-gnueabi-gcc in your terminal; if everything went fine it’ll be there. The only thing you need to do now is code, and that has no secret.

#include <stdio.h>

int main(void) {
    printf("Hello World!\n");

    return 0;
}

$ arm-linux-gnueabi-gcc -o hello hello.c

Now that you’ve compiled it, how do you run it? The answer is qemu-arm. Install it: sudo apt-get install qemu. After that, try to run your ARM executable.

Did everything go OK? Probably not. That’s because your executable needs some shared libs; yep, it’s a pain. But you can fix it in two flavors: i) qemu-arm -L /usr/arm-linux-gnueabi/ hello; ii) recompiling with the static flag: arm-linux-gnueabi-gcc -static -o hello hello.c, and then running qemu-arm hello. Between those two, the first is better, since you don’t want your executable to have the giant size that results from all the libs that get put into it. But for test cases you can do it either way. You can see other approaches here

In a scenario where you have a board, you can just scp your code there and run it; in case you don’t, qemu-arm and cross compiling can work exactly the way you need.

References:

[1] http://tuxthink.blogspot.com.br/2012/04/executing-arm-executable-in-x86-using.html
[2] https://www.acmesystems.it/arm9_toolchain
[3] https://stackoverflow.com/questions/16158994/qemu-arm-cant-run-arm-compiled-binary

These days I was just thinking: sometimes when you build something for Debian, for example, you get a nice .diff generated in /tmp, but what about all those builds or compiles you do that don’t leave any diff to check whether anything is different?
The thing is that checking a log difference is very easy, you just need to run diff log1 log2. OK, it’s easy, it’s nice, but if you want to play a little bit with Python you can do something like this:

import sys
from difflib import unified_diff

# first read the previous or original log, as a list of lines
# (unified_diff compares sequences of lines, not whole strings)
with open('log1', 'r') as log1:
    _log1 = log1.readlines()

# now get the output log 'dynamically' from your program
log2 = sys.stdin.readlines()

# finally, call unified_diff on them
for line in unified_diff(_log1, log2, fromfile='log1.txt', tofile='log2.txt'):
    sys.stdout.write(line)

Now when you run this, it’ll show you the differences between these log files in a nice unified diff format.
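If you want to try the same idea without files or stdin, here is a self-contained variant (the log contents are made up):

```python
from difflib import unified_diff

# Two fake logs, split into lists of lines as unified_diff expects
log1 = "starting\nloading config\ndone\n".splitlines(keepends=True)
log2 = "starting\nloading config\nwarning: missing key\ndone\n".splitlines(keepends=True)

# Print the unified diff; only the added warning line shows up as '+'
for line in unified_diff(log1, log2, fromfile='log1.txt', tofile='log2.txt'):
    print(line, end='')
```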
I know it’s very silly, but still useful in some way 🙂

References:
[1] difflib

It’s not always that you’ll get what you want in your life. It’s up to you to keep fighting every day. In this crazy tech world we have a lot of pathways and you don’t need to follow all of them. Choose one wisely, put things in a notebook, learn, learn every day.
Even if you failed some interviews, keep learning something new every day. The worst that can happen is that you become less ignorant. Success happens even when you fail, since through failure you can fix everything that went wrong and try again, and again, and again…
That said, maybe today is the time to learn more about the ARM arch and ARM assembly. Let’s dive into it? See the references you’ll need here

Sometimes it’s quite interesting to measure the run time of a given snippet of your code. In C++11 this is very simple, and in my case I’m using it just like this.

In my .h file I define

#ifndef __PROFILER__H
#define __PROFILER__H

#include <chrono>

using namespace std::chrono;

high_resolution_clock::time_point start(void);
high_resolution_clock::time_point end(void);
void finished(high_resolution_clock::time_point, high_resolution_clock::time_point);

#endif

And in my .cpp file

#include <iostream>
#include "profiler.h"


high_resolution_clock::time_point start(void) 
{
    return high_resolution_clock::now();
}

high_resolution_clock::time_point end(void)
{
    return high_resolution_clock::now();
}

void finished(high_resolution_clock::time_point start, high_resolution_clock::time_point end)
{
    // note: __FUNCTION__ here would expand to "finished", not the name of
    // the measured function, so print a plain label instead
    duration<double> elapsed = end - start;
    std::cout << "Elapsed time: " << elapsed.count() << "s" << std::endl;
}

In order to use it, you just wrap the code you want to measure between the start and finished functions, passing end as a parameter. Like this:

#include <iostream>

// It's not often that you'll be profiling your code, so just guard it
// with a -D flag (see the Makefile below)
#if PROFILER
#include "profiler.h"
#endif

// Some function to measure
void initLoad(void) {
    std::cout << "Initializing data in memory..." << std::endl;

#if PROFILER
    high_resolution_clock::time_point s = start();
#endif

    // long long, since 100000000000 overflows a plain int
    for (long long i = 0; i < 100000000000LL; i++)
        std::cout << i << std::endl;

#if PROFILER
    finished(s, end());
#endif
}

In your Makefile you can use a -D flag to optionally compile your code with profiling, such as:

SRC = $(wildcard src/*.cpp)
PROG_NAME = test
INCLUDE = -Iinclude 
LIBS = -Ilibs
FLAGS = -std=c++0x
DFLAGS = -q -nx -tui
PROFILER ?= 0

# To use profiler just add: make PROFILER=1
ifeq ($(PROFILER), 1)
	FLAGS += -DPROFILER
endif

test:
	g++ -o $(PROG_NAME) $(SRC) $(INCLUDE) $(LIBS) $(FLAGS)

Now you can use this ‘lib’ for any source whose run time you want to measure, by typing make PROFILER=1 🙂

That’s all folks!

Screen tip

How about using your bash configuration in your screen session?
Well, that is very simple to configure. Just add this to your .screenrc:

defshell -bash

Since sometimes your bash in a screen session is not identify in your terminal, you can also add this info into your PS variable in .bashrc. Just adding something like this in the end of your .bashrc:


set_term_name() {
    current="screen"
    # note the spaces and quotes around '=': [ $TERM=$current ] would always be true
    if [ "$TERM" = "$current" ];
    then
        echo $TERM
    fi
}
export PS1="\u@\h \[\033[32m\]\w\[\033[33m\] \$(set_term_name)\[\033[00m\] $"

That’s all folks!