Wednesday, December 5, 2012

Lecture notes for the design of the std::bind replacement

Normally when I blog it's for myself; this isn't professional programming, this is hack-and-slash programming at its best. I normally wander around a topic until I get bored of it, blog what I complete and move on. I have hundreds of half-written posts, incomplete code fragments, scratch notes and todo'ed ideas... eh, maybe someday I'll get to them... And my blog posts are normally minimal on the explanation and maximal on the code rampage...

Unfortunately, sooner or later you get asked to present something that would form a logically concise and worthwhile presentation, and somehow the excuse "But I'm not anything more than just another comp-sci nut with a blog and too much free time..." doesn't work...

So here are the notes from my presentation on the std::bind replacement (in a non-ADHD form).

The conversion of the current std::bind implementation into a movable form, and the lessons learned in the process.


Intro


As the old saying goes, it is often the journey that matters and not the destination. In this generation we tend to get distracted easily and fail to see the point of taking the harder route when the destination is the same anyway. Engineering and science are among those "arts" that often benefit from taking the harder route. Make no mistake, engineering and science are an art: the result can be an industrial bare-bones application (like a 1970s "mobile" phone) or something as refined as an iPhone. The key difference is the engineer who created the piece of work, their know-how and their skill at applying it. By reinventing the wheel once in a while, we engineers can pick up on the original basis of the problem and the foundation of its solution, and with this new insight read into the minds of the original designers, metaphorically pick their brains, and re-imagine the solution into one suitable for a comparable problem.

Which is what I have done in a series of blog posts here.

The problem

The issue at hand: currently std::bind and lambdas don't seem to be move compatible. The basic problem we are discussing here is the next logical step after perfect forwarding, namely "delayed perfect forwarding" or "indirect perfect forwarding". The idea of "perfect forwarding" is to determine whether the object is being moved or not and pass it on as such. "Delayed perfect forwarding" would be performing that perfect forward through to a function in a delayed, lazy calling context; this effectively means "binding". That is, we move the function and the current context's data into the bind's capture, then move it back out of the capture context at the actual moment of execution and fill in the last-minute details. The current lambda disallows this:
std::function<void ()> && evil_move_capture()
{
  std::thread t(thread_core);
  std::function<void ()> func = 
  [std::move(t)()]->void 
  {
    std::cout << "Joining the thread....\n";
    t.join();
  };

  func();
}
Resulting in these compile issues:
lambda_move.cpp: In function ' std::function<void ()> && evil_move_capture()':

lambda_move.cpp:17:6: error: capture of non-variable 'std'

<built-in>:0:0: note: 'namespace std { }' declared here
You can of course move at execution time... but that misses the point of capture vs parameters:
std::function<void ()> && evil_create_func()
{
  std::thread t(thread_core);

  std::function<void ()> func = 
  [&]()->void 
  {
    std::thread t0(std::move(t));
    std::cout << "Joining the thread....\n";
    t0.join();
  };

  //works here, but not after the return!
  //func();

  return std::move(func);
}
And of course you can't seem to do it with the current bind implementation. (Although it has been suggested to me that this is a mistake in the compiler/library implementation and not in the new standard.)
std::function<void ()>&& evil_bind_capture()
{
  std::thread t(thread_core);
  //to check that it's going to work normally
  //bind_f(std::move(t));

  auto func = std::bind(bind_f, std::move(t));
  func();
}
…/c++/4.7.0/functional:1206:35: error: cannot bind 'std::thread' lvalue to std::thread&&'

The solution

The simplest solution is to move into the capture context and then always move out of it. Of course this misses some of the more subtle points, but who cares.
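As a minimal sketch of that idea (not the final design; OneShotCapture is a hypothetical name, used only to show the move-in/move-out flow):

#include <utility>

//Sketch only: move the object in when the "bind" is created,
//and always move it back out when the bound function finally runs.
template <typename T>
struct OneShotCapture
{
  T value_;

  OneShotCapture(T&& v) :
    value_(std::move(v))     //move in at bind time
  {}

  template <typename Func>
  void call(Func f)
  {
    f(std::move(value_));    //always move out at call time
  }
};

//e.g. OneShotCapture<std::thread> cap(std::thread(thread_core));
//     cap.call(join_thread);  //the thread is moved out into join_thread (as in the later listings)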

Step 1: Create an inspectable class for the Captured data.

This is a basic reproduction of a tuple. std::tuple is the standard's declare-by-member object... reusing this idea will allow you to create more complex member inspection and a self-documenting/self-traiting class. I.e. I'm just doing this so that I can expand its functionality after the fact. Points to note:
  • This is a normal recursive implementation of a variadic template, creating an inspectable, self-declaring object that is capable of knowing what its members are.
    • It peels a single param off the parameter list, uses it as the main member of the struct, and derives from the recursive construct of the remaining params.
  • The constructor accepts its params as rvalue references (the caller std::moves them in) and perfect-forwards them into the local storage.
The simplistic version is:
template <typename A, typename ...Args>
struct Obj : public Obj<Args...>
{};
The actual application is:
//The general case
template <typename A, typename ...Args>
struct MoveCapture : public MoveCapture<Args...>
{
  A a_;

  MoveCapture(A&& a, Args&& ...args) :
    MoveCapture<Args...>(std::forward<Args>(args)...),
    a_(std::forward<A>(a))
  {}
};

//The terminal case
template <typename A>
struct MoveCapture<A>
{
  A a_;

  MoveCapture(A&& a) :
    a_(std::forward<A>(a))
  {}
};
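
A quick usage sketch (assuming the thread_core() worker used throughout these posts): a move-only std::thread can now live inside the capture.

std::thread t(thread_core);
MoveCapture<std::thread, int> cap(std::move(t), 42); //t and 42 are forwarded into storage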

Step 2: Expand the Capture class for retrieval

Now this stores the values. The problem of course is how to then retrieve them. This is where the tuple rebuild comes into play. Points to note:
  • This is another of the interesting variadic template techniques. Let's dub it "parameter induction".
    • This is the idea of appending parameters to a variadic parameter list instead of reducing it.
    • The control mechanism for this is of course the original reduction that was used when creating the object in the first place. The key difference is that the inducted list of outgoing objects does not have to match the original input list of objects.
The simplistic version of parameter induction is:
template <typename ...I>
void Control<R, RR...>::induct(I ...i)
{
  R r;
  Control<RR...>::induct(i..., convert(r));
}
The actual application is:
//The general case
template <typename A, typename ...Args>
struct MoveCapture : public MoveCapture<Args...>
{
  A a_;
  ...

  template <typename Func, typename ...InArgs>
  void call(Func& f, InArgs && ...args)
  {
    MoveCapture<Args...>::call(f,
    std::forward<InArgs>(args)...,
    std::move(a_));
  }
  ...
};

//The terminal case
template <typename A>
struct MoveCapture<A>
{
  A a_;
  ...

  template <typename Func, typename ...InArgs>
  void call(Func& f, InArgs && ...args)
  {
    f(std::forward<InArgs>(args)...,
      std::move(a_));
  }
  ...
};
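
A usage sketch (take() here is a hypothetical target function, not part of the design): with the elided pieces filled in, the captured values come back out in their original order as the arguments of the bound function.

void take(int&& i, std::thread&& t)
{
  t.join();
}

MoveCapture<int, std::thread> cap(7, std::thread(thread_core));
cap.call(take); //expands to take(std::move(the 7), std::move(the thread))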

Step 3a: Handling the early Capture of Placeholders

Now we can get the data in and out... But the next problem in the design is how to handle the difference between a placeholder and actual data being captured. Primarily this just means that the placeholder type and object will appear in the parameter list, so in theory all you have to do is specialize the object for the placeholder. Points to note:
  • This might on the surface seem simple, but it is one of the more tricky items to make work.
  • Pay special care to the "const" when using a std::move'd item. This is a nasty little trap that lets the const slip around, resulting in massive compiler errors that are very difficult to figure out.
Simplest usage:
template <typename R, typename ...RR>
struct A {};

template <typename Special, typename ...RR>
struct A<typename recreate<Special>::type, RR...> {};
The actual application is:
template <int I, typename ...Args>
struct MoveCapture<const std::_Placeholder<I>, Args...> : public MoveCapture<Args...>
{};
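
In use (as in the full listing further down) the placeholder object itself is simply handed in at bind time, and nothing is stored for that slot:

MoveCapture<const std::_Placeholder<1> > capture(std::placeholders::_1); //records slot 1, stores no data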

Step 3b: Handling the resolution of early Captured and later passed Params

Furthermore, there is yet another catch-22 when you try to actually resolve the placeholders. You end up having two variadic parameter lists to handle, and the C++11 spec allows for only one per template. So here is another interesting technique: variadic template chaining. Points to note:
  • Since the spec allows one list, the only way to do this is via a list separator... in this case that just happens to be a "struct" or "class" object.
  • It is probably wise to end this in a function call, for reasons I'll explain later.
The simplistic form:
A<X...>::B<Y...>::C<Z...>::call(d...)
The actual application. Additional points to note:
  • In this design the chaining actually appears early: it sits in the construction of the binder, so that the objects handed to the binder (possibly containing placeholders) and the objects forwarded on to the end call can differ in type.
template <typename ...P>
struct MovableSetup
{
  typedef void (*Func)(P&&...);

  template <typename ...H>
  static MovableBinder<Func,MoveCapture<H...>> setup(Func f, H ...h)
  {
    typedef MoveCapture<H...> Capture;
    Capture* capture =  new MoveCapture<H...>(std::forward<H>(h)...);

    return MovableBinder<Func,Capture>(f,capture);
  }
};
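
A usage sketch taken from main() in the full listing: the class template carries the bound function's parameter types, while the chained setup<> call carries the (possibly placeholder) types handed in at bind time.

auto movable_binder = MovableSetup<double>::setup<const std::_Placeholder<1> >(hello, std::placeholders::_1);
movable_binder(1.34);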

Step 4: Something for the future. A Compiler param deductor

Now the final lesson to learn (but not applied here) is that there is a slight difference between the class/struct usage of variadic templates and the function usage of variadic templates. This creates a sizable difference in the end usage and, more importantly, in the capability of that usage. Points to note:
  • Functions have the capability of deducing the types from their arguments and forwarding those into the template. In contrast, with a struct or class there are no argument instances, so it can't guess the types; you have to spell them out explicitly.
    • This means that if you wish to create type-deducing systems from instantiated objects, you will need to create a function wrapper to make it work auto-magically.
  • This is why I suggested that you end your chained variadic templates with a function call, so that it can deduce as much as possible and save you and the end user of the code the effort.
In its simplistic form it's:
template <typename ...P>
struct A
{};

template <typename ...P>
A<P...> maker(P ...p) { return A<P...>(p...); }
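
A concrete, compilable version of that sketch (the Holder/make_holder names are hypothetical, just for illustration):

#include <tuple>

template <typename ...P>
struct Holder
{
  std::tuple<P...> values_;
};

//the deduction point: the function template works out P... from its arguments
template <typename ...P>
Holder<P...> make_holder(P ...p)
{
  return Holder<P...>{ std::tuple<P...>(p...) };
}

//Holder<int, double, char> must be spelled out explicitly, but
//make_holder(1, 2.5, 'x') deduces exactly the same type automatically.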

Wednesday, November 21, 2012

movable std::bind with placeholders

And finally the placeholder-compatible version of the movable binder from 2012/11/perfect-forwarding-bind-compatable-with.html. This is again using threads to show that it works with move-only objects.

Also, this probably needs gcc to work, as it uses gcc's internal placeholder class std::_Placeholder<>. And I had to work around a compiler bug that kills the variadic templates; a nasty little monster, that one...

And I still haven't done anything about the problem of copying the binder.

OK, this code probably needs a bit of explanation.
There are several parts to a binder/lambda:
  1. The "Binder Object": the main body of the binder. In the following code it's called "MovableBinder".
  2. The "Bound function": the function that is to be executed at the later point. In the code below it's the typedef or template param called "Func".
  3. The "Closure": the part of the binder that captures variables from the context the binder was created in. In the code below it's called "MoveCapture".
  4. The "Placeholders": placement markers that allow the parameters handed in at execution time to be jumbled around to match the binding interface. In the code below I attempted to use the standard std::placeholders. It hasn't really worked out so great.
  5. The "Parameters": the params passed to the binder at the point of its execution. In the following code they are cached in the "MoveHolder" class.
The main core of this code design is the MoveCapture.

On construction it recurses down the variadic param list, decimating it into an aggregate object that caches one param "A" (or a placeholder) on each step.

On "call" it recurses down the MoveCapture structure again, but this time it does the exact opposite and inductively constructs a list of params into the variadic param list called "InArgs". If it strikes a placeholder it simply retrieves that entry from the MoveHolder object. By the end of the structure it has recreated the full param list for the bound function call.


#include <iostream>
#include <memory>
#include <utility>
#include <tuple>
#include <thread>
#include <functional>
#include <string>

template <typename ...H>
struct MoveHolder 
{
  typedef std::tuple<H...>  Tuple;
  Tuple values_;
  
  MoveHolder(H&& ...h) :
    values_(std::forward<H>(h)...)
  {}

  template <int I>
  typename std::tuple_element<I, Tuple>::type&& access()
  {
    return std::move(std::get<I>(std::move(values_)));
  }
};


template <typename A, typename ...Args> 
struct MoveCapture : public MoveCapture<Args...>
{
  A a_;

  MoveCapture(A&& a, Args&& ...args) :
    MoveCapture<Args...>(std::forward<Args>(args)...),
    a_(std::forward<A>(a))
  {
    std::cout << "DEBUG " << __LINE__ << "\n";
  }

  template <typename Func,
     typename H,
       typename ...InArgs>
  void call(Func& f,
       H&& holders,
       InArgs&& ...args)
  {
    std::cout << "DEBUG " << __LINE__ << "\n";

    MoveCapture<Args...>::call(f, 
          holders,
          std::forward<InArgs>(args)...,
          std::move(a_));
  }
};

template <typename A> 
struct MoveCapture<A>
{
  A a_;

  MoveCapture(A&& a) :
    a_(std::forward<A>(a))
  {
    std::cout << "DEBUG " << __LINE__ << "\n";
  }

  template <typename Func,
     typename H,
       typename ...InArgs>
  void call(Func& f,
       H& holders,
       InArgs&& ...args)
  {
    std::cout << "DEBUG " << __LINE__ << "\n";

    f(std::forward<InArgs>(args)...,
      std::move(a_));
  }
};

template <int I, typename ...Args>
struct MoveCapture<const std::_Placeholder<I>, Args...> : public MoveCapture<Args...>
{
  enum { Index = I };
  
  MoveCapture(const std::_Placeholder<I>& h, Args&& ...args) :
    MoveCapture<Args...>(std::forward<Args>(args)...)
  {
    std::cout << "DEBUG " << __LINE__ << "\n";
  }

  //WARNING compiler BUG!
  // COMMENTING THIS OUT WILL CAUSE "call" to fail to build
  template <typename H>
  void access(H& holders)
  {
    std::cout << I << "\n";
  }
  //END BUG

  template <typename Func,
     typename H,
       typename ...InArgs>
  void call(Func& f,
       H& holders,
       InArgs&& ...args)
  {
    std::cout << "DEBUG " << __LINE__ << "\n";

    MoveCapture<Args...>::call(f,
          holders,
          std::forward<InArgs>(args)...,
          std::move(holders.access<I-1>()));
  }
};

template <int I>
struct MoveCapture<const std::_Placeholder<I>>
{
  MoveCapture(const std::_Placeholder<I>& h) 
  {
    std::cout << "DEBUG " << __LINE__ << "\n";    
  }

  //WARNING compiler BUG!
  // COMMENTING THIS OUT WILL CAUSE "call" to fail to build
  template <typename H>
  void access(H& holders)
  {
    std::cout << I << "\n";
  }
  //END BUG

  template <typename Func,
     typename H,
       typename ...InArgs>
  void call(Func& f,
       H& holders,
       InArgs&& ...args)
  {
    std::cout << "DEBUG " << __LINE__ << "\n";

    f(std::forward<InArgs>(args)...,
      std::move(holders.access<(I-1)>()));
  }
};

//template <typename ...P>
template <typename Func, typename Capture>
class MovableBinder
{
private:
  Func func_;
  Capture* capture_;

public:
  MovableBinder(Func func, Capture* cap) :
    func_(func),
    capture_(cap)
  {}

  ~MovableBinder()
  {
    if (capture_!= NULL)
      delete capture_ ;
  }

  template <typename ...V>
  void operator()(V&& ...v)
  {
    std::cout << "DEBUG " << __LINE__ << "\n";
    MoveHolder<V...> holders(std::forward<V>(v)...);
    capture_->call(func_, holders);
  }
};

template <typename ...P>
struct MovableSetup
{
  typedef void (*Func)(P&&...);

  template <typename ...H>
  static MovableBinder<Func,MoveCapture<H...>> setup(Func f, H ...h)
  {
    typedef MoveCapture<H...> Capture;

    std::cout << "DEBUG " << __LINE__ << "\n";
    Capture* capture =  new MoveCapture<H...>(std::forward<H>(h)...);
    return MovableBinder<Func,Capture>(f,capture);
  }
};

void hello(double&& done)
{
  std::cout << done << "\n";
}  

void hello2(double&& done1,
     double&& done2)
{
  std::cout << " part1:" << done1 
     << " part2:" << done2 
     << "\n";
}  

void thread_core() 
{
  std::this_thread::sleep_for(std::chrono::seconds(2));
}

void join_thread(std::thread&& t1)
{
  std::cout << "Joining the thread....\n";
  t1.join();
}

int main()
{
  std::cout << "******** MANUAL USAGE **********\n";
  
  MoveCapture<const std::_Placeholder<1> > capture(std::placeholders::_1);
  MoveHolder<double> param(1.23);
  std::cout << param.access<0>() << "\n";
  
  capture.call(hello, param);
  
  std::cout << "******** NORMAL USAGE *********\n";
  
  auto movable_binder_1a = MovableSetup<double>::setup<const std::_Placeholder<1> >(hello,std::placeholders::_1);
  movable_binder_1a(1.34);

  auto movable_binder_1b = MovableSetup<double>::setup(hello,3.14);
  movable_binder_1b();

  auto movable_binder_2a = 
    MovableSetup<double, double>::setup<const std::_Placeholder<1>,
       const std::_Placeholder<2>>
          (hello2, std::placeholders::_1, std::placeholders::_2);
  movable_binder_2a(4.56,1.37);

  auto movable_binder_2b = 
          MovableSetup<double, double>::setup<double,
           double>
          (hello2,492.54, 743.293);
  movable_binder_2b();

  auto movable_binder_2c = 
          MovableSetup<double, double>::setup<double,
       const std::_Placeholder<2>>
          (hello2,492.54, std::placeholders::_2);
  movable_binder_2c(4.56,1.37);  //trash param 1

  std::cout << "******** THREAD BIND USAGE *********\n";

  std::thread tc(thread_core);
  auto movable_binder_tc = 
    MovableSetup<std::thread>::setup(join_thread, std::move(tc));
  movable_binder_tc();

  std::thread td(thread_core);
  auto movable_binder_td = 
    MovableSetup<std::thread>::setup<const std::_Placeholder<1> >(join_thread, std::placeholders::_1);
  movable_binder_td(std::move(td));

}

Tuesday, November 13, 2012

perfect forwarding std::bind replacement compatible with std::move

And without any more messing around here is the generic version of the movable std::bind replacement.

  1. This version doesn't handle placeholders yet.
  2. There are probably problems if you try to copy it.
Update: I have since added a placeholder version here.
#include <iostream>
#include <memory>
#include <utility>
#include <thread>

//A helper class for storage and stitching params together
template <typename A, typename ...Args> 
struct MoveHelper : public MoveHelper<Args...>
{
  A a_;

  MoveHelper(A&& a, Args&& ...args) :
    MoveHelper<Args...>(std::forward<Args>(args)...),
    a_(std::forward<A>(a))
  {}

  template <typename Func, typename ...InArgs>
  void call(Func& f, InArgs && ...args)
  {
    MoveHelper<Args...>::call(f,
         std::forward<InArgs>(args)...,
         std::move(a_));
  }
};

//The helpers terminal case
template <typename A> 
struct MoveHelper<A>
{
  A a_;

  MoveHelper(A&& a) :
    a_(std::forward<A>(a))
  {}

  template <typename Func, typename ...InArgs>
  void call(Func& f, InArgs && ...args)
  {
    f(std::forward<InArgs>(args)...,
      std::forward<A>(a_));
  }
};

//the Main std::bind movable replacement
template <typename ...P>
class MovableBinder
{
  typedef void (*F)(P&&...);

private:
  F func_;
  MoveHelper<P...> help_;

public:
  MovableBinder(F func, P&& ...p) :
    func_(func),
    help_(std::forward<P>(p)...)
  {}

  MovableBinder(F func, P& ...p) :
    func_(func),
    help_(p...)
  {}
    
  ~MovableBinder()
  {}

  void operator()()
  {
    help_.call(func_);
  }
};

//And the test with threads.. the ultimate move only object...

void thread_core() {
  std::this_thread::sleep_for(std::chrono::seconds(2));
}

void join_thread(std::thread&& t1)
{
  std::cout << "Joining the thread....\n";
  t1.join();
}  

void join_threads(std::thread&& t1, std::thread&& t2)
{
  std::cout << "Joining the threads....\n";
  t1.join();
  t2.join();
}  

int main()
{
  std::thread ta(thread_core);
  std::thread tb(thread_core);
  MovableBinder<std::thread, std::thread> movable_binder_threads(&join_threads, std::move(ta),std::move(tb));

  movable_binder_threads();

  std::thread tc(thread_core);
  MovableBinder<std::thread> movable_binder_thread(&join_thread, std::move(tc));

  movable_binder_thread();

}

c++11 Variadic template expansion options

While hacking up this movable bind function I got distracted (as per usual) thinking about the range of possible variadic template expansions available to us.

I figure that there are 2 main choices to make when using argument pack types:
  1. Argument expansion or Argument Recursion
  2. Direct or Indirect

Argument expansion is where you flatten out the arguments with the ... and pass them on.
  • The direct form of this is to perform the expansion on the argument pack itself.
  • The indirect form is to use a helper function around the arguments and expand the function call, passing it each object.

Argument recursion is where you siphon off one or more arguments on each pass and call down into your templates for the next stage.

  • The direct form is to do this in your main struct or function
  • The indirect form of this is where you create an inner helper or core that handles the recursive part for you.

Here's a triplet of examples that show the main use cases that I would expect to see. I guess there might be more, but nothing else comes to mind at the moment...
#include <iostream>
#include <utility>

// Indirect recursive class expansion
template <typename Func, typename A, typename ...Args> 
struct CallForEachHelper
{
  static void call(Func & f, A && a, Args && ...args)
  {
    f(std::forward<A>(a));
    CallForEachHelper<Func, Args...>::call(f, std::forward<Args>(args)...);
  }
};

template <typename Func, typename A> 
struct CallForEachHelper<Func, A>
{
  static void call(Func & f, A && a)
  {
    f(std::forward<A>(a));
  }
};

template <typename Func, typename ...Args>
void CallForEachIndirect(Func & f, Args && ...args)
{
  CallForEachHelper<Func, Args...>::call(f, std::forward<Args>(args)...);
}

// Direct recursive function expansion
template <typename Func, typename Arg>
void CallForEachDirect(Func & f, Arg arg)
{
  f(std::forward<Arg>(arg));
}

template <typename Func, typename Arg, typename ...Args>
void CallForEachDirect(Func & f, Arg arg, Args && ...args)
{
  f(std::forward<Arg>(arg));
  CallForEachDirect<Func, Args...>(f, std::forward<Args>(args)...);
}

// direct non-recursive argument flattening
template <typename Func, typename ...Args>
void CallForAll(Func & f, Args && ...args)
{
  f(std::forward<Args>(args)...); //indirect argument flattening using std::forward
}

void foo(int a)
{
  std::cout << "a:" << a << "\n";
}

void foo3(int a, int b, int c)
{
  std::cout << "a:" << a 
     << " b:" << b
     << " c:" << c 
     << "\n";
}

int main()
{
  int a1=1;
  int a2=2;
  int a3=3;
    
  CallForEachIndirect(foo, a1, a2, a3);  // run foo on each item via helper class recursion
  CallForEachDirect(foo, a1, a2, a3);    // run foo on each item by direct recursion
  CallForAll(foo3, a1, a2, a3);          // run foo3 on all items in 1 shot.
}

Sunday, November 11, 2012

perfect forwarding std::bind - why is std::bind not compatible with std::move??

Update: I have since posted the generic form of this code over Here

This is another oddity in the new spec: std::bind and lambdas are not able to take std::move()'d items. The crazy part is that the standards committee went so far as to define std::forward to achieve perfect forwarding and didn't take the next logical step and create a perfect-forwarding version of std::bind and lambda...

So I hacked up a perfect-forwarding version of std::bind. This is the one-parameter version; I'm still working on the variadic template version for unlimited params.

#include <iostream>
#include <memory>
#include <utility>

class MoveOnlyObj
{
public:
  MoveOnlyObj(int val) :
  val_(val)
  {
    std::cout << "MoveOnlyObj() - " << val_ << "\n";
  }
  
  MoveOnlyObj(MoveOnlyObj&& obj) :
  val_(obj.val_)
  {
    std::cout << "MoveOnlyObj(&&) - " << val_ << "\n";
    obj.val_ = 0; //0 it to make it very visible
  }

  ~MoveOnlyObj()
  {
    std::cout << "~MoveOnlyObj - " << val_ << "\n";
  }

private:
  friend std::ostream& operator<<(std::ostream& out, const MoveOnlyObj& o);
  MoveOnlyObj(MoveOnlyObj& obj);
  
  int val_;
};

std::ostream& operator<<(std::ostream& out, const MoveOnlyObj& o)
{
  out << o.val_;
  return out;
}

  
//lets start with the basic
template <typename P>
class MovableBinder1
{
  typedef void (*F)(P&&);

private:
  F func_;
  P p0_;

public:
  MovableBinder1(F func, P&& p) :
    func_(func),
    p0_(std::forward<P>(p))
  {
    std::cout << "Moved" << p0_ << "\n";
  }

  MovableBinder1(F func, P& p) :
    func_(func),
    p0_(p)
  {
    std::cout << "Copied" << p0_ << "\n";
  }
    
  ~MovableBinder1()
  {
    std::cout << "~MovableBinder1\n";
  }

  void operator()()
  {
    (*func_)(std::forward<P>(p0_));
  }
};

void test_func(int&& i)
{
  std::cout << "test_func: " << i << "\n";
}

void move_func(MoveOnlyObj&& i)
{
  MoveOnlyObj taker(std::move(i));
  std::cout << "move_func: " << taker << "\n";
}

int main()
{
  MovableBinder1<int> movable_binder_1_rvalue(&test_func, 3);
  movable_binder_1_rvalue();

  int i=4;
  MovableBinder1<int> movable_binder_1_lvalue(&test_func, i);
  movable_binder_1_lvalue();

  MoveOnlyObj m(5);
  MovableBinder1<MoveOnlyObj> movable_binder_move_only(&move_func, std::move(m));
  movable_binder_move_only();
}

And the output looks like this:
Moved3
test_func: 3
Copied4
test_func: 4
MoveOnlyObj() - 5
MoveOnlyObj(&&) - 5
Moved5
MoveOnlyObj(&&) - 5
move_func: 5
~MoveOnlyObj - 5
~MovableBinder1
~MoveOnlyObj - 0
~MoveOnlyObj - 0
~MovableBinder1
~MovableBinder1


And here is the gripper... this one passes a thread in as the parameter:
#include <iostream>
#include <memory>
#include <utility>
#include <thread>

//lets start with the basic
template <typename P>
class MovableBinder1
{
  typedef void (*F)(P&&);

private:
  F func_;
  P p0_;

public:
  MovableBinder1(F func, P&& p) :
    func_(func),
    p0_(std::forward<P>(p))
  {}

  MovableBinder1(F func, P& p) :
    func_(func),
    p0_(p)
  {}
    
  ~MovableBinder1()
  {}

  void operator()()
  {
    (*func_)(std::forward<P>(p0_));
  }
};

void thread_core() 
{
  std::this_thread::sleep_for(std::chrono::seconds(3));
}

void join_thread(std::thread&& t)
{
  std::cout << "Joining the thread....\n";
  t.join();
}  

int main()
{
  std::thread t(thread_core);
  MovableBinder1<std::thread> movable_binder_thread(&join_thread, std::move(t));

  movable_binder_thread();
}

Thursday, November 8, 2012

What is std::async ?

The new C++0x standard has provided many new and powerful tools for a C++ coder's arsenal. Unfortunately I have already started to see several cases of copy-the-book coding. It is all too easy to forget that these tools are just slightly more complex constructs built from more fundamental units.

It is often a very enlightening experience to reinvent the wheel and understand what the original creators had to go through, rather than just being another fool "standing on the shoulders of giants"; just another one of the new generation of school kids that can't add numbers together without a calculator.

So on that note I got curious about how std::async works. On the surface it seems rather simple, but to actually recreate it I took several wrong turns and learned a bit on the way.

There are several lessons that got me here:
* Look carefully at the lambda (and also at std::bind) and note that it can't take move semantics. As a result the promise is on the heap and held via a pointer...
* Note that my async_imitate requires the template param to be filled in. I have missed whatever trick std::async has that allows it to deduce the actual return type of the function (see the deduction sketch below).
* Note that this version doesn't account for parameters being passed to the thread function... I'm lazy, get over it.
* Note the generic catch(...) with std::current_exception to pull down the actual throw; otherwise you're up for std::terminate killing your app when that unexpected throw comes out. It's a bonus that this is a nice way to pass the exception to a different stack.
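
For that second point, one way to close the gap (a sketch of my own with a hypothetical async_imitate_deduced name, not how the standard library actually does it) is to deduce the return type from the callable with decltype and a trailing return type:

#include <exception>
#include <future>
#include <thread>

//Sketch only: deduce R from the callable so the caller doesn't have to spell it out.
template <typename Func>
auto async_imitate_deduced(Func func) -> std::future<decltype(func())>
{
  typedef decltype(func()) R;

  std::promise<R>* p = new std::promise<R>();
  std::future<R> ret = p->get_future(); //grab the future before the thread can delete p

  std::thread t([p, func]()
    {
      try
        {
          p->set_value(func());
        }
      catch(...)
        {
          p->set_exception(std::current_exception());
        }
      delete p;
    });
  t.detach();

  return ret;
}

//async_imitate_deduced(&test_func) - with test_func from the listing below -
//returns std::future<int> without the explicit <int> that async_imitate needs.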

#include <iostream>
#include <thread>
#include <future>

// thread vs async.

int test_func() 
{
  return 1; 
}

template< class R >
std::future<R> async_imitate(std::function<R(void)> func)
{ 
  std::promise<R>* p = new std::promise<R>();

  std::function<void(void)> wrapper = 
    [p,func]()->void
    {
      try
        {
          p->set_value(func());
        }
      catch(...)
        {
          p->set_exception(std::current_exception());
        }
      delete p;
    };

  std::future<R> ret = p->get_future(); //grab the future before the detached thread can delete p

  std::thread t(wrapper);
  t.detach();
  return ret;
}

void run_thread()
{
  std::cout << "start run_thread\n";
  
  std::future<int> ret = async_imitate<int>(&test_func);
  int i = ret.get();
  std::cout << "run_thread: " << i << "\n";
}

void run_async()
{
  std::cout << "start run_async\n";

  std::future<int> ret = std::async(&test_func);
  int i = ret.get();
  std::cout << "run_async: " << i << "\n";
}

int main()
{
  run_async();
  run_thread();
}

Monday, October 22, 2012

c++11 universal init vs tuples.

While messing around with tuples I noted a rather large headache with their array initialization. If I'm not mistaken the problem is caused by the choice to make the tuple constructor explicit. Basically this ultimately forces the user to typedef the tuple and then explicitly construct each of the lines in the array init.


#include <iostream>
#include <tuple>

int array[2]{1,2};

struct TableEntryC
{
  int         a;
  double      b;
  const char* c;
};

TableEntryC tableC[2]
{
  {1,1.1,"test"},
  {2,2.2,"try"}
};

typedef std::tuple<int,double,std::string> TableEntry;

TableEntry entry{ 1, 1.1, "test" };

TableEntry table[2]
{
  TableEntry{1,1.1,"test"},
  TableEntry{2,2.2,"try" }
};

// bad bad bad...
//TableEntry bad_table[2]
//{
//  {1,1.1,"test"},
//  {2,2.2,"try" }
//};


int main()
{
  std::cout << std::get<1>(entry) << std::endl;
  std::cout << std::get<2>(table[1]) << std::endl;
}

Monday, October 1, 2012

c++11 tuples and schema generation

Thanks to the new C++11 it's finally possible to create a system which can take in generic data structures and convert them automatically into SQL schema and commands.

To do this you simply have to use a tuple as the class container (which you can typedef however it pleases you) and create the database adapter with a few variadic templates that auto-generate the SQL commands for data handling. Here is an example of generating the CREATE TABLE command for an SQLite database adapter.

#include <iostream>
#include <string>
#include <sstream>

template <typename T>
std::string to_string(T t)
{
  //hmm this is missing!
  std::stringstream ss;
  ss << t;
  return ss.str();
}

template <typename I>
struct SchemaType
{
};

template <>
struct SchemaType<int>
{
  static constexpr const char* type = "INTEGER";
};

template <>
struct SchemaType<float>
{
  static constexpr const char* type = "REAL";
};

template <>
struct SchemaType<char*>
{
  static constexpr const char* type = "TEXT";
};

//CREATE TABLE t(x INTEGER, y, z, PRIMARY KEY(x DESC));
template <typename... II>
struct AutoSchemaCore;

template <typename I, typename... II>
struct AutoSchemaCore<I, II...>
{
  static std::string create_params()
  {
    std::string ret = "a"
      + to_string(sizeof...(II))
      + " "
      + SchemaType<I>::type;
    if (sizeof...(II) > 0)
      {
        ret += ","
          + AutoSchemaCore<II...>::create_params();
      }
    return ret;
  }
};

template <>
struct AutoSchemaCore<>
{
  static std::string create_params()
  { return ""; }
};

template <typename I, typename... II>
struct AutoSchema
{
  static std::string create_table(const char* tablename)
  {
    // CREATE TABLE t(x INTEGER, y, z, PRIMARY KEY(x DESC));
    return std::string("CREATE TABLE ") 
      + tablename
      + "("
      + AutoSchemaCore<I,II...>::create_params()
      + ");";
  }
};

int main()
{
  //generate the CREATE TABLE t(...) SQL command 
  std::cout << AutoSchema<float>::create_table("tableA") << "\n";
  std::cout << AutoSchema<int,char*,float>::create_table("tableB") << "\n";
}

Result
CREATE TABLE tableA(a0 REAL);
CREATE TABLE tableB(a2 INTEGER,a1 TEXT,a0 REAL);

Thursday, September 27, 2012

c++11 Variadic template

Another interesting feature of C++11 is variadic templates. Basically these allow you to pass an unlimited number of heterogeneous parameters to your functions. However, their syntax seems to force you to define them recursively. As a result they seem to be usable in 2 main ways:

  1. Map: apply a repeated operation over the entire list of objects
  2. Reduce (or fold): take in the list of objects and compound them into something
How very Hadoop. This apparently limited usability makes them a prime candidate for use in a forwarding function: something that simply recurses over the list of parameters and forwards them through a normaliser into a lambda, or directly into a templated function that can handle the specific type. Here is the normalise-and-lambda approach.
#include <iostream>
#include <sstream>
#include <functional>

template <typename R, typename I>
R normalise(I i)
{
  try
    {
      R item;
      std::stringstream s;
      s << i;
      s >> item;
      return item;
    }
  catch(...)
    {}

  return R();
}

template <typename R, typename I>
R reduce(std::function<R (const R&, const R&)> action, I i)
{
  return normalise<R>(i);
}

template <typename R, typename I, typename... II>
R reduce(std::function<R (const R&, const R&)> action, I i, II... ii)
{
  return action(normalise<R>(i), reduce<R>(action, ii...));
}

template <typename R, typename I>
void map(std::function<void (const R&)> action, I i)
{
  action(normalise<R>(i));
}

template <typename R, typename I, typename... II>
void map(std::function<void (const R&)> action, I i, II... ii)
{
  action(normalise<R>(i));
  map<R>(action, ii...);
}

int main()
{ 
  std::function<int (const int&, const int&)> max
    = [](const int& a, const int& b)->int { return a>b ? a : b;  };  

  std::function<float (const float&, const float&)> min
    = [](const float& a, const float& b)->float { return a<b ? a : b;  };  

  std::function<float (const float&, const float&)> sum
    = [](const float& a, const float& b)->float { return a+b;  };  

  std::cout << "Max:" << reduce(max, "452", 3.422, 32, 0x000000ff, "543.485") << "\n";
  std::cout << "Min:" << reduce(min, "452", 3.422, 32, 0x000000ff, "543.485") << "\n";
  std::cout << "Sum:" << reduce(sum, "452", 3.422, 32, 0x000000ff, "543.485") << "\n";

  float res = 0;
  std::function<void (const float&)> accum
    = [&res](const float& a) { res += a;  };  

  map(accum, "452", 3.422, 32, 0x000000ff, "543.485");
  
  std::cout << "\n" << "Accum:" << res << "\n";

  std::stringstream ss;
  std::function<void (const float&)> stream = [&ss](const float& a) { ss << a << ",";  };

  map(stream, "452", 3.422, 32, 0x000000ff, "543.485");
  std::cout << ss.str() << "\n";
}

Tablified Traits

Here is a simple C++ trait wrapper that goes around the old standard C-style array-of-structs lookup table. It handles the forward and reverse lookup of the trait entry via its main key or an attribute, and then the access of the various attributes from the resulting row.

There are some caveats to it: the enum ids can't be sparse, and the UNKNOWN entry needs to be second to last, before the MAX entry marker.

#include <iostream>

template<int LAST, typename R, typename T>
R lookup(T* table, R T::*member, int key) 
{
  if (key < LAST) return table[key].*member;
  return table[LAST].*member;
}

template<int LAST, typename R, typename T>
int locate(T* table, R T::*member, R target) 
{
  int k = 0;
  while (k < LAST)
    {
      if (table[k].*member == target)
        return k;
      k++;
    }
  return LAST;
}

template<int LAST, typename T>
int locate(T* table, const char* T::*member, const char* target) 
{
  int k = 0;
  while (k < LAST)
    {
      if (std::string(table[k].*member) == std::string(target))
        return k;
      k++;
    }
  return LAST;
}

class TableTrait
{
public:
  enum Key
    {
      ITEM1,
      ITEM2,
      ITEM3,
      ITEM4,
      ITEM5,
      UNKNOWN, //Must be second last
      MAX      //Must be last
    };    

private:  
  struct TableEntry
  {
    Key         key;
    int         attribA;
    bool        attribB;
    const char* attribC;
  };

  static TableEntry lookup_table[MAX];

  int key_;
public:
  TableTrait(int key) :
    key_(lookup<UNKNOWN>(lookup_table, &TableEntry::key, key))
  {}

  TableTrait(const char* rev_key) :
    key_(locate<UNKNOWN>(lookup_table, &TableEntry::attribC, rev_key))
  {}

  bool        valid()   { return key_ != UNKNOWN; }
  Key         key()     { return static_cast<Key>(key_); }
  int         attribA() { return lookup<UNKNOWN>(lookup_table, &TableEntry::attribA, key_); }
  bool        attribB() { return lookup<UNKNOWN>(lookup_table, &TableEntry::attribB, key_); }
  const char* attribC() { return lookup<UNKNOWN>(lookup_table, &TableEntry::attribC, key_); }  
};

TableTrait::TableEntry TableTrait::lookup_table[TableTrait::MAX] =
{
  { ITEM1,   3, false, "ITEM1"   },
  { ITEM2,   2, true,  "ITEM2"   },
  { ITEM3,   6, false, "ITEM3"   },
  { ITEM4,   5, true,  "ITEM4"   },
  { ITEM5,   8, false, "ITEM5"   },
  { UNKNOWN, 0, false, "UNKNOWN" }
};

int main()
{
  std::cout << "lookup: " << TableTrait(TableTrait::ITEM1).attribC() << "\n";
  std::cout << "lookup: " << TableTrait(TableTrait::ITEM2).attribB() << "\n";
  std::cout << "locate: " << TableTrait("ITEM3"          ).attribA() << "\n";
  std::cout << "locate: " << TableTrait("ITEM2"          ).key()     << "\n";
  std::cout << "locate: " << TableTrait("Blah"           ).attribC() << "\n";
}

Friday, September 21, 2012

C++ Mixins and Curiously Recurring Templates

C++ Curiously Recurring Templates and Mixins

In large systems it is often desirable to have policy or rule classes that handle a group of settings for a particular instance of a data object. Generally the most common settings are chosen for the base class so that it becomes the "Default" policy. Other policies are added by overriding the various parts of the rules class to make new policies such as "DefaultWithA" and "DefaultWithB". But then someone will note the need for a "DefaultWithAandB".

At this point coders either cut and paste, or try to break the main rule object into sub-grouping objects "PolicyGroupA" and "PolicyGroupB" and convert the main rule into an interface that just aggregates the tree of sub-rule objects. This slowly clogs up the rule checking with a series of wrapper function calls just to get to the final cluster of rules. Furthermore, it's often rather difficult to divide these objects in a sane way because they have some form of relationship that caused them to be grouped together in the first place.

Ruby provides an interesting language feature to handle common functionality and code grouping, called a mixin. Ruby's mixin is just another inheritance trick, and once you realize what it is it's easy to repeat it in C++. Simply put, it works out as a near brother of the curiously recurring template. Here is how it works:

#include <iostream>

class Default
{
public:
  virtual void ruleA() { std::cout << "default::ruleA\n"; }
  virtual void ruleB() { std::cout << "default::ruleB\n"; }
  virtual void ruleC() { std::cout << "default::ruleC\n"; }
};

class Impl1 : public Default
{
public:
  virtual void ruleC() { std::cout << "Impl1::ruleC\n"; }
};

template <typename T>
class MixinA : public T
{
public:
  virtual void ruleA() { std::cout << "mixin::ruleA\n"; }
};

template <typename T>
class MixinB : public T
{
public:
  virtual void ruleB() { std::cout << "mixin::ruleB\n"; }
};

class Unrelated
{
};

class DefaultWithMixinA : public MixinA<Default >
{
};


class DefaultWithMixinAandB : public MixinB< MixinA< Default > >
{
};

class Impl1WithMixinA : public MixinA<Impl1>
{
};

class UnrelatedWithMixinA : public MixinA<Unrelated>
{
};

int main()
{
  std::cout << "DefaultWithMixinA\n";
  DefaultWithMixinA d;
  d.ruleA();
  d.ruleB();
  d.ruleC();
  
  std::cout << "DefaultWithMixinAandB\n";
  DefaultWithMixinAandB ab;
  ab.ruleA();
  ab.ruleB();
  ab.ruleC();
  
  std::cout << "Impl1WithMixinA\n";
  Impl1WithMixinA i;
  i.ruleA();
  i.ruleB();
  i.ruleC();
  
  std::cout << "UnrelatedWithMixinA\n";
  UnrelatedWithMixinA u;
  u.ruleA();
}

Here is some code so that you can see the difference between this and a real curiously recurring template. Pay close attention to how each one inherits:

#include <iostream>

template <typename T>
struct CuriouslyRecurring
{
public:
  virtual void ruleA()  { std::cout << "CRP::ruleA\n"; }
};

template <typename T>
struct Mixin : public T
{
public:
  virtual void ruleA() { std::cout << "mixin::ruleA\n"; }
};

struct Default
{
public:
  virtual void ruleA() { std::cout << "default::ruleA\n"; }
};

struct DefaultWithMixin : public Mixin< Default >
{

};

struct SelfWithCRT : CuriouslyRecurring< SelfWithCRT >
{
};

// Impossible: the compiler can't tell which ruleA to use when you call it
//struct DefaultWithCRT : public CuriouslyRecurring< DefaultWithCRT >, Default
//{
//};

// Impossible: this is a mixin; it can't inherit from the incomplete class
//struct SelfMixin : public Mixin< SelfMixin >
//{
//};

int main()
{
  SelfWithCRT crt;  
  crt.ruleA();

  DefaultWithMixin mix;  
  mix.ruleA();

  //DefaultWithCRT dcrt;  
  //dcrt.ruleA();
}

Tuesday, July 24, 2012

Why isn't your code 100% bug free, did you even test it?

In computer science there is a well-known result about testing and the intractability of obtaining 100% certain proof of bug-free code. To summarize it: "Testing can never prove that software is free of all bugs."

Explaining that it is impossible to the layman, and to beings of lower standing such as highly paid, wild-eyed, foaming members of upper management in tailored suits who think that they require a megaphone to be heard, can be an entertaining experience...

Come to think of it, have you ever had the experience of a ticked-off manager screaming at you, and all you can think about is how much he looks like a 2-year-old having a tantrum in a store while the parent quietly ignores them and continues to shop... So you sit like the adult, waiting, counting the man's heart rate by the huge pulsing vein in his forehead, wondering the whole time if you're going to need the AED from down the hall or not...

By now his rampage is pushing the last of the oxygen out of his system and he is starting to turn that deep shade of purple, and you know that he has to take a breath soon, so you get ready, and as he sucks in that life-giving air you shoot off: "Oh... I sent you an email about that 2 weeks back, boss... you did read it, right?"

And then it suddenly dawns on him that the unread mails from you, sitting between his playboy subscription renewal and the facebook update messages, might have been important... As the oxygen rushes back into his system his face turns a sudden bright red, and you are left eternally wondering if he actually blushed or if it was just his flesh re-oxygenating...

Ahh give the man a break. The truth is he doesn't understand crud...he knows it... we know it.

His highly paid job is all about making promises about stuff that he doesn't fully understand, talking about the work that you are doing, and listening to you dribble on about the big-O complexity of the search functions... And try as he might to listen to you, all he can think about is that hot blonde from last weekend... So now he is pissed because he thinks you deliberately led him to the cliff edge that he just fell off, and he is trying to swim back up through the air like Wile E. Coyote...

The reality of the situation is that this is good for neither you nor him, so re-educating the boss is the best course. The way I seem to have had the most success explaining the inability of testing to find all bugs is with a coin-flip game.

Consider software to be a set of coins (2 or more). Running the software is a coin toss, and a bug is when all the coins show heads. So testing is the idea of tossing the coins until you get all heads and find the bug.

Now with this in mind, walk him through a "trivial piece of software", i.e. a game with 2 coins. Explain to him that the chance of finding the bug on a toss is 1/4, so it will take about 4 tries to find it. Yes, I know "4 tries" is not really accurate, but keep it simple; a lot of people don't get or care about the mathematical background to the average.

Then expand "a 3 coin piece of software" which has a 1/8th of a chance to find a bug and therefore it will take 8 tries to find a bug. Make certian to pound home that fact that the size of the program causes an exponential growth in the cost and time of testing.

Then you hit him with the fact that your program is really something thousands of coins big, and it would be far out of his budget to have it all tested so that it is truly 100% bug free. And there you have the seed of knowledge... at least until the boss drinks his next set of neurons into oblivion...

Later on you can take this metaphor a bit further by explaining that:
* directed testing is like weighting the coins to come up at a point that you think has bugs
* coverage, and expensive coverage tools, are all about recording the outcomes of the coin tosses so that we can tell what % of all the possible results we have seen and take a guess at when to really stop testing
* etc etc...

Monday, June 11, 2012

C++11 lockless queues

I have been messing around with the new C++11 threading in order to write a post on it... as per usual I ended up sidetracked on something else. While coding it I got thinking about lockless vs lock implementations. Basically, lockless implementations require you to design to two very hard restrictions: "write mastering" and "obstruction-free updating".

Write-Mastering
Write mastering is where a data variable is only ever updated by a single thread. It's often surprising how easy this is to arrange. E.g. for a queue, the head is mastered from the producer side and the tail is mastered from the consumer side (this is the example below).

Obstruction-freedom
When write mastering is not possible, an obstruction-free check can be used. This technique doesn't explicitly lock a data structure but instead uses a pair of "consistency markers" that are updated in the sequence: read, update marker 1, copy, update the copy, (re-read and) check, write back the copy (or roll back), update marker 2.

To fully explain it: when an update is planned, the first consistency marker is compared to the second. If they don't match, enter a spin loop until they do. If/when they match, the structure is free for an update. The first marker is scrambled and written to something new. The update proceeds on a copy, and once it's done the markers are rechecked to see if they are still scrambled exactly as this thread chose. If they are, then nothing else changed them, so the update writes back the data copy and then updates the secondary marker to match the first. If the compare fails, the updated copy is tossed and the process starts again.
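
A minimal sketch of that marker dance (my own illustration using std::atomic and a single int payload; not taken from any particular library):

#include <atomic>

struct MarkedData
{
  std::atomic<unsigned> marker1;
  std::atomic<unsigned> marker2;
  int data;

  MarkedData() : marker1(0), marker2(0), data(0) {}

  void update(int new_value)
  {
    while(1)
      {
        unsigned seen = marker2.load();
        if (marker1.load() != seen)
          continue;                   //markers disagree: someone is mid-update, spin
        //scramble the first marker to claim the structure
        //(the CAS here stands in for the "recheck" step in the description above)
        if (!marker1.compare_exchange_weak(seen, seen + 1))
          continue;                   //lost the race, start again
        int copy = data;              //work on a copy
        copy = new_value;             //...the actual update
        data = copy;                  //write the copy back
        marker2.store(seen + 1);      //markers agree again: update complete
        return;
      }
  }
};

Readers do the mirror-image check in this sketch: read marker1, copy the data out, and accept the copy only if marker2 still matches afterwards.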

Lockless designs tend to be big CPU and memory wasters when they get into their lockless spins or are constantly clashing over updates to shared structures. As a result, the system running them should be balanced, tuned and as large as possible, so that there is always data ready to be processed in each thread with as few conflicts as possible. You can get a taste of how CPU intensive they are by running the following example and watching your CPU usage hit the roof.

//compile with 
//g++ -std=c++11 lockless_queue.cpp -o lockless_queue.exe
#include <stdint.h>
#include <thread>
#include <iostream>

class LocklessQueueSys
{
  enum { size=1000};

public:
  LocklessQueueSys() :
    head(0),
    tail(0)
  {}
  
  void producer()
  {
    uint32_t count= 0;
    while(1)
      {
        while(((head+1)%size) == tail); //spin lock
        msg[head] = count++;
        head = (head + 1) % size;
      }
  }
  
  void consumer()
  {
    uint32_t expect=0;
    while(1)
      {
        while(head == tail); //spin lock
        if(expect != msg[tail])
          std::cout << "Error:" << expect << "\n";
        if(expect%10000000 == 0)
          std::cout << "check:" << expect << "\n";
        expect = msg[tail]+1;
        tail = (tail + 1) % size;
      }
  }
private:
  int32_t msg[size];
  int32_t head;
  int32_t  tail;
};

int main()
{
  //Use a member function in a thread
  LocklessQueueSys x;
  std::thread tpro(&LocklessQueueSys::producer, &x);
  std::thread tcon(&LocklessQueueSys::consumer, &x);
  
  tpro.join();
  tcon.join(); 
}

Sunday, May 27, 2012

C++11 Delegating Constructors.

Another much-needed improvement in C++ addresses code reuse in constructors. Often you were forced to create an "init" function and call it from the body of each constructor. What this means is that you are effectively default-constructing the member variables of the object and then re-initializing them in the common "init" function.

The new standard fixes this by allowing delegating constructors. Basically, one constructor can now call another constructor of the same class in its place.

Here's an example:
//compile with  g++ -std=c++11 $< -o $@
#include <iostream>

class DelgateConstructor
{
  std::string str_;

public:
  DelgateConstructor(const char* s) : 
    str_(s)
  {
    std::cout << "Working...\n";
  } 

  DelgateConstructor() :
    DelgateConstructor("Here we are") 
  {
    std::cout << str_ << "\n";
  }
};

int main()
{
  DelgateConstructor whatever;
}
This results in this output:
Working...
Here we are

Saturday, May 5, 2012

fixing a lost ubuntu unity dash

Gahh... the blue screen of death was a blessing... unix systems really can get themselves into all kinds of crazy twists. I have been trying out the various media center software for ubuntu lately... It's the bad, the worse and the just plain ugly... MythTV, Freevo... nothing just works nicely; they have trouble doing the basics of playing an avi from a hard disk, or booting up without destroying the monitor settings if the TV is powered down... Out of all of it, the one that works the best is VLC... surprise surprise... I'm on the verge of setting up a web server, hacking some PHP together that talks to a telnet-interfaced VLC daemon that boots at startup... and calling it a day...

Anyway, back to the point... playing with this stuff tends to brick your box real quick. Basically, the lot of them are too invasive and over-bloated with additional power features while neglecting the basics... On several of my trials I lost the unity dash and menu bars; here is how I was restoring them.

Punch ctrl+alt+f1 if you can't get a terminal (it might be safer to do it there anyway), then run:
unity --reset
sudo restart lightdm
Punch ctrl+alt+f7 to jump back to the GUI and watch/check the restart.

It may take 1 or 2 tries; it's a little slow and sometimes gets stuck doing the --reset.

Friday, May 4, 2012

c++11 lambda

c++11 lambdas

Lambdas are an excellent and long-awaited addition to the C++ language. Lambdas were simply not possible before, but the Boost lib offered a rather passable set of macros that let you get away with in-lining a kind of lambda (basically it was a series of function objects).

The lambda syntax, however, makes my skin itch a bit. It is yet another reuse of the bracket characters - [](){} - and one that potentially lowers the readability of the code and introduces coding errors as a result. I would have preferred something a bit more clear and obvious.

Firstly, the basic lambda syntax is [capture](parameters)->return-type {body}. This is often shorthanded to the most basic form when people blog and type about it, which is []() { ... }


The first [] pair forms the "capture". The capture is run one time, at and in the context of the lambda's creation (i.e. the point where you put the code). There are several convenient shortcuts that grab the various variables from the current context automatically. Consider these to be the members (and constructor params) of the resulting lambda object.

The next () pair forms the "parameters"; these are the same as the parameters of a function definition. This resembles the operator()(...) function call of a function object or the parameter list of a function pointer. Note that these ARE, surprisingly, optional, but generally they are not left out.

The difference between capture and parameters is rather subtle (or obvious, depending on your point of view or experience with it to date). It can seem like overkill or just outright fluff when you are constantly using lambdas in the local context. However, the key point to realize is that if you pass the resulting lambda out of the current context without executing it, then the "capture" has already completed for the context that you made it in and passed it out of. As a result, anything captured by reference must still be IN CONTEXT at the point of the lambda's actual execution (or the capture must have been made by value). Parameters, on the other hand, are filled in when the lambda is actually executed.
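
A tiny example of that distinction (a sketch; make_greeter is a hypothetical name): the capture is baked in, by value, when the lambda is created, while the parameter is supplied every time it is called.

#include <iostream>
#include <functional>
#include <string>

std::function<void (const std::string&)> make_greeter()
{
  std::string greeting = "Hello, "; //only alive inside this function

  //greeting is captured BY VALUE here, at creation time, so it survives the return;
  //name is a parameter and is filled in at each call instead.
  return [greeting](const std::string& name)
    {
      std::cout << greeting << name << "\n";
    };
}

int main()
{
  auto greet = make_greeter(); //the capture has already happened
  greet("world");              //parameter supplied at execution time
  greet("again");
}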

The -> forms the return type. It is optional and often left out. However, it can become quite important when templates are used with lambdas; otherwise the compiler can get terribly confused about which of the specializations is supposed to be put in place. I assume this is just a problem with the maturity level of the current generation of C++11 compilers.

And of course the {} forms the body of the code. The same scoping rules apply here as if you coded the body as a completely separate member function of a function object. So keep in mind that you cannot reach outside to variables that were not passed in through the capture or params.

Ok an example: The little known summing of numbers.. zzZZZ.. ha wadaimiss...


// compile with 
//  g++  -std=c++11 lambda.cpp

#include <iostream>
#include <algorithm>
#include <vector>
#include <functional>

void examplePassByValue(int* start, int* end)
{
  int stuff[]={1,2,3,4,5,6,7,8,9};
  int sum = 0; 
  std::for_each(stuff, stuff+(sizeof(stuff)/sizeof(int)), [sum] (int v) 
    {
      std::cout << v << "\n";
    // sum is captured by const value (so it's BAD... it won't compile if you try to modify it)
    //sum += v;
    });
}

void examplePassByRef(int* start, int* end)
{
  int sum = 0; 
  std::for_each(start, end, [&sum] (int v )
  {
    std::cout << v << "\n";
    sum += v;
  });
  std::cout << " sum is : "<< sum << "\n";
}

void examplePassByRefImplicatScoped(int* start, int* end)
{
  return;

  // pass by ref of the entire implicit scope
  int sum = 0; 

  std::for_each(start, end, [&] (int v) 
  {
    std::cout << v << "\n";
    sum += v;
  });
  std::cout << " sum is : "<< sum << "\n";
}

// and now the fun...
template <class T>
void filterTo(T* start,
       T* end,
       std::function<bool (const T&)> filter,
       std::function<void (const T&)> action
       )
{ 
  std::for_each(start, end, [=] (T v) { if ( filter( v ) ) action( v ); });
}

void exampleStdFunctionLambdaNoClosure(int* start, int* end)
{
  //lambda compiling to a function pointer compatible with std::function
  std::function<bool (const int&)> filter
    = [](const int& v)->bool { return v%2; };
  std::function<void (const int&)> action
    = [](const int& v)->void { std::cout << "filt odd: " << v << "\n"; };

  filterTo(start, end,
    filter, action);
}

void exampleLambdaNoFunctionTemplateWeakness(int* start, int* end)
{
//  //This is the same as above and should work, however the current
//  // compiler is very poor at solving this... so it dies
//  filterTo(start, end,
//    [](const int& v)->bool { return v%1; },
//    [](const int& v)->void { std::cout << v << "\n"; });
}

void exampleStdFunctionLambdaClosure(int* start, int* end)
{
  //evil: a lambda with a closure creates a class object, not a function pointer
  // this can still wrap up in the std::function 
  int sum = 0;
  std::function<bool (const int&)> filter
    = [](const int& v)->bool { return v%2; };
  std::function<void (const int&)> action
    = [&sum](const int& v)->void {  sum += v; std::cout << "run sum:" << sum << "\n"; };

  //lambda compiling to a function pointer compatible with std::function
  filterTo(start, end,
    filter, action);

  std::cout << "Tot sum:" << sum << std::endl;
}

int main()
{
  int stuff[]={1,2,3,4,5,6,7,8,9};
  int* end = stuff + (sizeof(stuff)/sizeof(int)); //warning this is pointer arithmetic
 
  //suming examples
  examplePassByValue(stuff, end);
  examplePassByRef(stuff, end);
  examplePassByRefImplicitScoped(stuff, end);

  //find all odds..
  exampleStdFunctionLambdaNoClosure(stuff, end);
  exampleStdFunctionLambdaClosure(stuff, end);
  
  return 0;
}
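As for the deduction weakness in exampleLambdaNoFunctionTemplateWeakness, the usual workarounds (a sketch, not verified on every compiler) are to name T explicitly so nothing has to be deduced through the std::function parameters, or to wrap the lambdas in std::function at the call site yourself. Its body could then read:

  // 1. name T explicitly; the lambdas then only need to convert, not drive deduction
  filterTo<int>(start, end,
    [](const int& v)->bool { return v%2; },
    [](const int& v)->void { std::cout << "filt odd: " << v << "\n"; });

  // 2. or force the conversion at the call site so T can be deduced cleanly
  filterTo(start, end,
    std::function<bool (const int&)>([](const int& v)->bool { return v%2; }),
    std::function<void (const int&)>([](const int& v)->void { std::cout << v << "\n"; }));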

Since this is a bit of a mess to run, here is the Makefile to build and run it (set BASE to match your mingw 4.7 install path, and remember that the indented recipe lines must start with a tab):
BASE=/c/tools/mingw47
export PATH:=${BASE}/i686-w64-mingw32/lib:${BASE}/bin:${PATH}

all: run

lambda.exe: lambda.cpp
 g++ -std=c++11 lambda.cpp -o lambda.exe

run: lambda.exe
 lambda.exe



Refer: http://www.cprogramming.com/c++11/c++11-lambda-closures.html

Saturday, April 28, 2012

c++11 - mingw 4.7 install and msys setup notes

The new c++11 standard adds lots of new syntactic sugar to play with.

http://gcc.gnu.org/gcc-4.7/cxx0x_status.html


Unfortunately you'll need to get the 4.7 release directly... the automatic installer still won't download it for you.

http://code.google.com/p/mingw-builds/downloads/detail?name=i686-mingw32-gcc-4.7.0-release-c%2Cc%2B%2B%2Cfortran-sjlj.zip&can=2&q=


Once downloaded, unzip it into your desired location (mine is c:\tools\...)

Then you will need to help the system locate the compiler, linker and libs (runtime DLLs), so run the following in a prompt before compiling and before running the built .exes.

export PATH="/c/tools/tmp2/mingw/i686-w64-mingw32/lib:$PATH"
export PATH="/c/tools/tmp2/mingw/bin:$PATH"



boost 1.49 in windows 7 using mingw

Before you start: this is for (although it has worked on other systems)

mingw gcc version 4.6.2
boost version: 1.49
OS: Windows 7 home.

This is basically the same as before in my prior post /2011/02/boost-in-vista-using-mingw-and-cmdexe.html

First get the newer version of boost from here http://sourceforge.net/projects/boost/files/boost/1.49.0/boost_1_49_0.zip/download?use_mirror=jaist


Don't bother downloading one of the ones with a pre-built version of bjam; it won't work. You will need to build it yourself.

Setup gcc as in one of my prior posts.

Make certain that gcc is available on cmd.exe by running a fresh cmd.exe and executing:
gcc -v

You must double-check that this is not just a temporary change to the %PATH% env variable made by some script. It has to be set from the Windows GUI control directly to work reliably.
If gcc failed, you can add it to the PATH with the following sequence.
  • windows key+e
  • select "my computer"
  • right click it and select "properties"
  • 3rd tab -> click the "Environment Variables" button
  • add (or edit the existing) PATH entry and set its value to [installed_dir]/mingw/bin;[installed_dir]/mingw/lib (where installed_dir is the path to your mingw install)
Next, build bjam. For help refer to: building bjam for 1.49. Note that I use the directory c:\tools as my install area for all programs that need to avoid the Windows UAC etc. idiocy. Unzip the files into the desired location, then build bjam.exe in cmd.exe by executing:
cd C:\tools\boost_1_49_0\tools\build\v2\engine
build.bat mingw
Once built, copy C:\tools\boost_1_49_0\tools\build\v2\engine\bin.ntx86\b*.exe into C:\tools\MinGW\bin (this isn't needed but makes things easier later, since you likely have it in your %PATH% already). Next, build the boost libs, also in cmd.exe, by executing:
cd C:\tools\boost_1_49_0
bjam toolset=gcc --build-type=complete stage
Refer: http://www.boost.org/doc/libs/1_49_0/more/getting_started/unix-variants.html

Wait for the build system to grind it out. This time around there are very few build problems. I guess a lot of things have been fixed since the 1.47 version. You should then build a few boost test programs (in cmd or msys) with:
g++ -I"c:\tools\boost_1_49_0" -L"c:\tools\boost_1_49_0\stage\lib" -static boost_lamba_test.cpp -o a.exe
g++ -I"c:\tools\boost_1_49_0" -L"c:\tools\boost_1_49_0\stage\lib" -static boost_regex_test.cpp -lboost_regex-mgw46-1_49 -o b.exe

The test programs are from here:

Lamba test: http://www.boost.org/doc/libs/1_45_0/more/getting_started/windows.html#build-a-simple-program-using-boost

Regex test: http://www.boost.org/doc/libs/1_45_0/more/getting_started/windows.html#link-your-program-to-a-boost-library
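From memory, the lambda test is essentially the following (it echoes each integer typed on stdin multiplied by 3); double-check against the linked page if it does not build as-is:

#include <boost/lambda/lambda.hpp>
#include <iostream>
#include <iterator>
#include <algorithm>

int main()
{
  using namespace boost::lambda;
  typedef std::istream_iterator<int> in;

  // reads ints until EOF and prints each one tripled
  std::for_each(in(std::cin), in(), std::cout << (_1 * 3) << " ");
}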

Keep in mind that the order of the source and library files on the command line matters with mingw http://www.mingw.org/wiki/Specify_the_libraries_for_the_linker_to_use

mingw msys install with the new installer

Got myself a new Windows 7 machine as my work machine. Time to get it set up.

So MSYS and mingw have to be installed, and they have a new installer. So first let's get the installer.. I have the 0.5 beta from here:
http://sourceforge.net/projects/mingw/files/Installer/mingw-get/

Don't really know what they were thinking with this. It's an apt-get command line imitation. The old plain and simple Nullsoft installer, with its simple stepped choices of install location and the most common packages, could get the job done, no sweat.

Now you must have a net connection, or take the trouble to pre-download all the crud on a different machine with a connection ahead of time and transport all of that to the offline machine.. it's much messier.

To top it off, the default run of the program is just plain ridiculous. Open up cmd.exe and run:
mingw-get.exe

You get a "cant do a gui install prompt, OK?" in a >>GUI<< prompt... despite the fact that i just ran it from the cmd line...and then it closes without doing anything else.. WTF! Correct this too this command:
mingw-get.exe --help

Then you get the basic idea. When you install it, it appears to install into the ../ dir relative to the current one, so be careful where you run it:
mingw-get.exe update
mingw-get.exe install mingw
mingw-get.exe install g++
mingw-get.exe install msys

gcc.exe -v
g++.exe -v

This resulted in version 4.6.2 of gcc and g++ and version 1.0 of msys.

My install directory is "c:\tools\mingw", and the msys boot script installs to [install_dir]/msys/1.0, so for me the full path of it is:
c:\tools\mingw\msys\1.0\msys.bat

Once I booted up msys I noted a problem with the /mingw directory.. simply put, it was missing, so to get all the tools I did a final ln -s as such:
cd /
ln -s /c/tools/mingw mingw

And then everything in msys seemed to be alive and in the default path.

Sunday, April 15, 2012

Ubuntu 11.10 -- samba setup

Ok, now that we have all the disks set up, we need to set up samba shares so that the drives are available on the network.

sudo apt-get install samba
sudo apt-get install libpam-smbpass

Then edit and update the settings:
sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.20120415
sudo vi /etc/samba/smb.conf

   workgroup = WORKGROUP
   security = user

Then at the bottom add in your shares
[share200GB]
    comment = 200GB Share
    path = /media/biggy
    browsable = yes
    guest ok = no
    read only = no
    create mask = 0755

Keep in mind that if you give "guest ok = yes" then anyone can get into the disk and read/write etc...

Then restart the samba services
sudo restart smbd
sudo restart nmbd

In the end I also had to add a new user and enable that user for samba as well with:
sudo useradd --home /home/remote --shell /bin/false remote      
sudo passwd remote
sudo smbpasswd -e remote

https://help.ubuntu.com/11.04/serverguide/C/samba-fileserver.html

ubuntu 11.10 -- wake on lan

After you set up the BIOS to allow wake up, get ethtool to set up wake on LAN:
sudo apt-get install ethtool

sudo ethtool eth0

Look for the "Wake-on" settings, then select one. The most common is "g" (magic packet):

sudo ethtool -s eth0 wol g

Shut down the machine and check that it can be woken with a wake-up tool like "wakeonlan".

ubuntu 11.10 -- fixing missing mounted disks.

I have no clue why, but if you install the desktop version of Ubuntu it insists on dynamically mounting and unmounting the disks as the user needs them. I want them shared in samba, so I need to fix the disk mounting.

First discover all the disks that your system has with:
sudo fdisk -l
sudo blkid

It is best to use the blkid (UUID) of the device; this creates a more robust setup and allows the disk to move around regardless of SCSI/ATA boot ordering.

One of these will be your system disk... DO NOT mess with this disk; it is a sure-fire way to brick the machine. The basic install partitions look like this:
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048    76068863    38033408   83  Linux
/dev/sdb2        76070910    78163967     1046529    5  Extended
/dev/sdb5        76070912    78163967     1046528   82  Linux swap / Solaris

Next, rather than mess around with fstab and guess the whole thing from scratch, I'm going to use a GUI tool to mount the disks most of the way and clean up manually.

So to be safe we are backing up fstab first. If you brick it, boot from your install disk and copy the backup back over the top:
sudo cp /etc/fstab /etc/fstab.20120412

Open the "Dash Home" (upper left) and search for "Storage Device Manager"..
Note that If you get nothing click the "More Apps" and then try the search again (the good the bad and then unity...) click it youll end up in the software center install it(the button is one the mid right). Then repeat the above step.

Now Storage Device Manager isn't bug free; the newer hard disks tend to confuse it. Click each disk and confirm that the /dev/.. in the lower left corner matches the tree layout. If it does, and it prompts you that your device is not mounted, then go ahead and auto-mount it. If the text and the display mismatch, don't mess with it; you can brick the machine. Also confirm that you're not messing with the main system disk.

For example, on my machine it confused sda1 with /dev/sdb5 ..

So once you have run the tool (or manually added the basic disk entries), you will want to mount each disk and confirm that they work. Then reboot and confirm that boot still works. Once your machine boots past that, we can think about fixing permissions etc.

This page has an excellent listing of the basic option types to use to make the disks work well. https://help.ubuntu.com/community/AutomaticallyMountPartitions#Systemwide_Mounts


Ubuntu 11.10 -- fixing grub

Geez, can they screw the system up even worse.. now we have a graphical grub that is not 100% compatible with all monitors. To disable this mess, edit /etc/default/grub and set:

 GRUB_TERMINAL=console
 GRUB_CMDLINE_LINUX_DEFAULT="text" 

Then run:
 update-grub

And reboot. Now we should have text, and problems on boot will be clearly visible.... amazing!

Ubuntu 11.10 -- renaming the machine

God knows why, but the install appends a mess of crud to the end of the machine name. To fix this (no GUI needed), edit the hostname file and replace it with something saner:

sudo vi /etc/hostname

Ubuntu 11.10 -- VNC screen not updating fixes


This time around I'm combining all my various servers into one. And I had the brilliant idea to update to Ubuntu 11.10 in the process. This was a massive mistake. Unity is buggy as hell.

First step: set up VNC. This is simple (or should be).

1. First create a user that autologs in,
2. in that user hit the "Dash home" in the upper left of the unity menu.
3. Enter "desk" in the search box and find the "Desktop sharing" app and open it
4. check the "Allow other users to view", "Allow other users to control"
5. uncheck the "You must confirm..."
6. check the "Require the user to enter this password" and punch in a password
7. save it

Now that's dead simple... but it doesn't work! The key issue is the stupid eye candy. So log out (from the upper right cog menu) and get to the user login menu.

There, select the automatic-login user's account, click the cog, select "Unity 2D" to disable the pointless eye candy, and then log in. Now VNC will connect and the screen will update.

On a side note, you might want to disable more of the crud, like backgrounds etc., so that the connection is faster and lighter.


Sunday, January 1, 2012

Code Enablement control

I recently re-encountered an interesting design problem: The requirement to roll out code with the ability to quickly and globally disable it at run time.

Systems like Google's search engine are 100% online; they use the idea of machine sets and stages, where they roll out new versions of the code, boot up the system and then enable/disable parts of the system as they trial the new algorithms.

So how can this be done? Well, there are many approaches, but they probably break down into 2 main categories:

  • Enablement Switches
  • Runtime replaceable code parts.
An Enable Switch is a boolean or value test that filters entry into specific region(s) of the code (a call-site sketch follows after this list).
  • Advantage:
    1. Micro-level control
  • Disadvantage:
    1. Generally this requires a constant check of a bool in a hash or in shared memory.
    2. The code is statically linked, so if it is wrong you will need to replace the whole process's code.
Runtime replaceable code can be anything from whole processes to dynamic libs or some other plugin system.
  • Advantage:
    1. Dynamically able to add/replace libs at runtime when things go wrong or a clear tweak is visible.
  • Disadvantage:
    1. It is macro level, i.e. it requires larger chunks of code to be loaded/unloaded.
    2. It can't handle extensive changes across the plugin interface.
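To make the Enable Switch idea concrete, here is a tiny self-contained stand-in of what the call site looks like (isEnabled is a fake; the real shared-memory backed lookup is developed below):

#include <iostream>
#include <string>

// fake stand-in for the shared-memory backed lookup built later in this post
static bool isEnabled(const std::string& key)
{
  return key == "new_ranking";   // pretend an operator flipped this switch on
}

void processRequest(int requestId)
{
  if (isEnabled("new_ranking"))            // micro-level gate around the new code
    std::cout << "new path for request " << requestId << "\n";
  else
    std::cout << "old path for request " << requestId << "\n";  // proven fallback
}

int main()
{
  processRequest(42);
  return 0;
}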
Ideally the solution is probably a mix of the above. So the bit Enablement system needs to be able to dynamically expand and register new access keys as needed. This also accounts for process variations over heterogeneous machines and dynamically replaced code. Basically, the following solution is a shared-memory hash implementation that can be accessed and loaded from independent processes. Be careful with it, as I have been a bit lazy with its mutex locking and it will probably have some concurrency issues. Also keep in mind it's a hash, so it is more efficient to over-allocate the hash size versus the actual number of used keys. As always there are some other problems with the code, but you can work them out for an actual production implementation. Anyway, here is my hacked-up prototype of the system.
// compile with:
// g++ -I"c:\tools\boost_1_45_0" -L"c:\tools\boost_1_45_0\stage\lib" -static enable_control_flag.cpp -o enable_control_flag.exe

//#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
//#include <boost/thread/thread.hpp>
#include <boost/functional/hash.hpp>

#include <iostream>
#include <iomanip>
#include <string>
#include <cstring>    // std::memcpy / std::memset
#include <stdexcept>  // std::runtime_error
#include <stdint.h>   // uint32_t

// for windows Sleep!
#include <Windows.h>

#define KEY_MAX 64

struct EnableControlSwitch
{
  //really this is just a layout for a chunk of shared memory
  EnableControlSwitch() :
    hash_(0),
    state_(false)
  {
    key_[0] = '\0';
  }

  uint32_t      hash_;
  bool          state_;
  char          key_[KEY_MAX];
  //expiry date...
  //creator process... so you can track where it's coming from.
};

using namespace boost::interprocess;
class EnableControl
{
public:
  enum { 
    MAX_ENTRIES = 256,   //best to keep this as a power of 2 for speed
    MEM_SIZE     = (MAX_ENTRIES*sizeof(EnableControlSwitch))
  };
  
  static EnableControl& instance() 
  { 
    static EnableControl me; 
    return me;
  }

  EnableControlSwitch& create(const std::string& key)  { return get(key, true); }
  EnableControlSwitch& get(const std::string& key,
      bool create = false)
  {
    //locate or register
    if(key.length() > KEY_MAX-1) throw std::runtime_error("Key too big");

    uint32_t hash = static_cast<uint32_t>(boost::hash_value(key));
    if (hash == 0) hash = 1; // 0 marks an empty slot, so never use it as a real hash
    uint32_t loc  = hash;

    // bounded linear probing (the old "loc = loc / MAX_ENTRIES" walk collapses
    // to slot 0 after a few steps and can then spin forever)
    for (uint32_t probes = 0; probes < MAX_ENTRIES; ++probes, ++loc)
      {
        uint32_t idx = loc % MAX_ENTRIES;
        if (switches_[idx].hash_ == 0)
          {
            if (!create)
              throw std::runtime_error("Unknown Key");

            //free location!
            //lockless version... assume atomic...
            switches_[idx].hash_ = hash;
            if (switches_[idx].hash_ == hash)
              {
                //proceed
                std::memcpy(switches_[idx].key_,
                            key.c_str(),
                            key.length());
                switches_[idx].key_[key.length()] = '\0';
                return switches_[idx];
              }
          }
        else if (switches_[idx].hash_ == hash)
          {
            return switches_[idx];
          }
        //otherwise the slot is occupied by a different key.. probe the next one
      }

    throw std::runtime_error("Too many key conflicts");
  }

  bool& state(const std::string& key)
  {
    EnableControlSwitch& aSwitch = get(key);
    return aSwitch.state_;
  }

  void enable(const std::string key)  { state(key) = true; }
  void disable(const std::string key) { state(key) = false; }

  std::ostream& printAll(std::ostream& out) const
  {
    for(uint32_t idx = 0; idx < MAX_ENTRIES; idx++)
      {
        if(switches_[idx].hash_ != 0)
          out << std::hex
              << "Idx:"    << idx
              << " Hash:"  << switches_[idx].hash_
              << " Key:"   << switches_[idx].key_
              << " State:" << switches_[idx].state_
              << "\n";
      }
    return out;
  }
  
private:
  EnableControl() :
    shm_(NULL),
    region_(NULL)
  {
    //setup shared mem
    shm_ = new windows_shared_memory(open_or_create, "SharedEnableControls", read_write, MEM_SIZE);
    region_ = new mapped_region(*shm_, read_write);
    // no memset here: Windows zero-fills a freshly created section, and
    // zeroing it again on open would wipe the switches of a running master
    switches_ = static_cast<EnableControlSwitch*>(region_->get_address());
  }

  ~EnableControl()
  {
    delete region_;
    delete shm_;
  }

  windows_shared_memory* shm_;
  mapped_region*         region_;
  EnableControlSwitch*   switches_;
};

std::ostream& operator<<(std::ostream& out, EnableControl const& control)
{
  return control.printAll (out);
}

void master()
{
  //should use an allocator... but lets keep it simple for now...
  EnableControl& ctrl = EnableControl::instance();

  const EnableControlSwitch& a = ctrl.create("a"); //high speed Enable point (share mem ref copy)
  const EnableControlSwitch& b = ctrl.create("b"); 
  const EnableControlSwitch& c = ctrl.create("c"); 
  const EnableControlSwitch& exit = ctrl.create("exit"); 

  while (!exit.state_)
    {
      try
        {
          std::cout << EnableControl::instance();

          std::cout << "sleeping..\n";
          //boost::this_thread::sleep(boost::posix_time::seconds(1));
          Sleep(1000);

          if(a.state_)
            std::cout << "a\n";
          if(b.state_)
            std::cout << "b\n";
          if(c.state_)
            std::cout << "c\n";
        }
      catch(std::exception& e)
        {
          std::cout << e.what();
        }
    }
}

void slave(std::string cmd, std::string key)
{
  try
    {
      if(cmd == "status")
        std::cout << EnableControl::instance();
      else if(cmd == "enable")
        EnableControl::instance().enable(key);
      else if(cmd == "disable")
        EnableControl::instance().disable(key);
      else if(cmd == "info")
        std::cout << "Key:" << key << " is "
                  << (EnableControl::instance().state(key) ? "ON" : "OFF") << "\n";
      else
        std::cout << "Unknown Command:" << cmd << " Key:" << key << "\n";
    }
  catch(std::exception& e)
    {
      std::cout << "Error: " << e.what() << "\n";
    }
}

int main(int argc, char const * const *argv)
{
  //this is a 2 process system 
  // master is run with no parameters
  // slave is run with one of the above listed commands + the optional key.

  if (argc == 1)
    master();
  else if (argc == 2)
    slave(argv[1],"");
  else
    slave(argv[1],argv[2]);
}
Some improvements that come to mind are:
  1. At compile time, compute the string hash (e.g. using boost::mpl::string) to squeeze a few extra milliseconds out of start up, but that is serious overkill IMHO. Better to improve the hash collision algorithm. (A sketch of a compile-time hash follows below.)
  2. Add the ability to delete hash entries.
  3. Add the ability to add pages for switches so that the compile time switch limit can be exceeded.
  4. A way to cleanly initialize the switch system.
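On improvement 1, a lighter alternative to boost::mpl::string (assuming a C++11 compiler is acceptable for this code) is a constexpr FNV-1a hash, sketched below. Note this is not the hash the prototype above uses, and get() would need an overload that accepts a precomputed hash value:

#include <stdint.h>

// C++11 constexpr FNV-1a: the hash of a literal key folds away at compile time
constexpr uint32_t fnv1a(const char* s, uint32_t h = 2166136261u)
{
  return (*s == '\0')
    ? h
    : fnv1a(s + 1, (h ^ static_cast<uint32_t>(static_cast<unsigned char>(*s))) * 16777619u);
}

static_assert(fnv1a("") == 2166136261u, "empty key hashes to the FNV offset basis");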