// http://www.impactinterview.com/2009/10/140-google-interview-questions/#software_engineer
// Write a C program which measures the speed of a context switch on a
// UNIX/Linux system.
// wow.. just wow... the questioner hates you.. so moving on...
// A context switch occurs when the kernel or hardware thinks it's time to do
// something else... you have no control over it.. *period*... the only true
// way to measure it is to get into the kernel, strip out *everything* and
// check it directly.. without any other programs, device drivers etc
// running.. I mean hell, painting the screen, that blinking cursor you're
// looking at while typing, that's *all* context switches.
// so write code to measure this.. well.. I don't know the kernel so that's
// out.. I don't have the skills to do the *real* job.. the best I can do is
// guess (the more conventional userspace approximation, a pipe ping-pong, is
// sketched after the program below).. basically I can measure the whole
// process: program executing, ctx switch, do something else, ctx switch,
// complete execution
// so we are going to try to measure something that should have very little
// variation and should scale well.. i.e. multiply and accumulate a register.
// we will do the ops a number of times and profile them, then change the
// total number of times we do it and reprofile.. of course smaller context
// switches may be getting buried in the testing method etc.. not to mention
// the timer's own effects etc etc
// what we are looking for is a statistical blip... the mult-accum timings
// should just shift right when we scale the loop up.. anything that stays
// still isn't part of our test.. it's likely to be some part of the kernel's
// context switching, or device drivers
// Anyway the results looked as expected.. the test execution time moved to
// the right when the number of cycles was increased.. there is a set of large
// recurring spikes in the execution times.. but I doubt these are the cost
// of a context switch, more likely some device driver, service or interrupt
// handler bumping into my execution etc
#include <iostream>
#include <chrono>
#include <map>
int main()
{
    // histogram of elapsed time per sample: nanoseconds -> number of occurrences
    std::map<long long, int> counts;
    for (int k = 0; k < 100; ++k)
    {
        std::cout << "." << std::flush;   // progress marker, one dot per batch
        // 100 batches x 10000 samples keeps the run to a few minutes at
        // ~100000 mult-accums per sample
        for (int j = 0; j < 10000; ++j)
        {
            // steady_clock is monotonic, unlike system_clock, so wall-clock
            // adjustments can't skew a measurement
            auto start = std::chrono::steady_clock::now();
            // the thing being profiled: a fixed-size multiply-accumulate loop;
            // a separate volatile accumulator keeps the loop variable intact
            // and stops the compiler optimising the work away
            volatile long long acc = 0;
            for (int i = 0; i < 100000; ++i)
            {
                acc += static_cast<long long>(i) * i;
            }
            auto end = std::chrono::steady_clock::now();
            long long delta = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
            ++counts[delta];   // operator[] starts a missing bucket at zero
        }
    }
    std::cout << "\n";
    for (const auto& bucket : counts)
    {
        std::cout << " " << bucket.first << ":" << bucket.second << "\n";
    }
}
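For what it's worth, the more conventional userspace approximation (the approach tools like lmbench's lat_ctx take) is a pipe ping-pong: two processes pinned to the same CPU pass a token back and forth, so every round trip forces at least two context switches. The following is only a rough sketch, assuming Linux (it uses sched_setaffinity) and an arbitrary round count; the number it prints is an upper bound, since it includes pipe read/write overhead on top of the switch itself.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // for sched_setaffinity / CPU_SET on glibc
#endif
#include <chrono>
#include <cstdio>
#include <sched.h>
#include <unistd.h>
#include <sys/wait.h>
int main()
{
    // pin everything to CPU 0 so the two processes must actually switch
    // rather than run in parallel on separate cores
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);

    int p2c[2], c2p[2];                 // parent->child and child->parent pipes
    if (pipe(p2c) != 0 || pipe(c2p) != 0)
        return 1;

    const int rounds = 100000;
    char token = 'x';
    if (fork() == 0)
    {
        // child: echo the token back once per round, then exit
        for (int i = 0; i < rounds; ++i)
        {
            read(p2c[0], &token, 1);
            write(c2p[1], &token, 1);
        }
        return 0;
    }

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < rounds; ++i)
    {
        write(p2c[1], &token, 1);       // wake the child (switch away)...
        read(c2p[0], &token, 1);        // ...block until it answers (switch back)
    }
    auto end = std::chrono::steady_clock::now();
    wait(nullptr);

    long long total_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
    std::printf("%lld ns per round trip, ~%lld ns per switch (upper bound)\n",
                total_ns / rounds, total_ns / (2 * rounds));
}

Pinning both processes to one core is the important bit: without the affinity call the scheduler can simply run them in parallel and there is nothing left to switch.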
The histogram output from the multiply-accumulate program was copied into OpenOffice and graphed. Note that the "fuzz" in the red section may be significant too.. but I'm not really a maths guy so I can't be certain, and my use of stats here is really flimsy, so grain of salt, people... Update: I just noticed that I didn't align the graphs very well (this wasn't intentional); the mistake is a bit disadvantageous to the results anyway, so I'll leave it be.
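One cheap sanity check on the "those spikes are probably not context switches" theory is to ask the kernel how many switches actually happened during a run. A small sketch, assuming Linux/BSD: getrusage() reports voluntary (ru_nvcsw) and involuntary (ru_nivcsw) context switch counts for the calling process, and the workload below is just a stand-in for the timing loop above. Running the real program under something like perf stat -e context-switches should give a similar count from the outside.

#include <iostream>
#include <sys/resource.h>
int main()
{
    rusage before{};
    getrusage(RUSAGE_SELF, &before);

    // stand-in for the timing loop above: a fraction of a second of
    // multiply-accumulate work
    volatile long long acc = 0;
    for (int i = 0; i < 100000000; ++i)
        acc += static_cast<long long>(i) * i;

    rusage after{};
    getrusage(RUSAGE_SELF, &after);

    std::cout << "voluntary switches:   " << (after.ru_nvcsw  - before.ru_nvcsw)  << "\n"
              << "involuntary switches: " << (after.ru_nivcsw - before.ru_nivcsw) << "\n";
}

If the involuntary count is tiny compared with the number of spikes in the histogram, the spikes are very likely something other than the scheduler swapping the process out, which would fit the device driver / interrupt handler suspicion above.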
