Saturday, March 18, 2017

Setting up basic Machine Learning rig - Ubuntu

For Ubuntu:
https://www.tensorflow.org/install/install_linux

First install python and pip

sudo apt-get install python python-pip python-dev
sudo pip install --upgrade pip virtualenv 

--- WITH A GPU ---

Install the GPU if not already done as described here:
https://www.pugetsystems.com/labs/hpc/NVIDIA-CUDA-with-Ubuntu-16-04-beta-on-a-laptop-if-you-just-cannot-wait-775/

Verify that you even have a usable GPU (it must be one compatible with CUDA):
lspci | grep -i nvidia

Install the CUDA drivers

If a prior install is giving you problems, remove it first:
sudo apt-get purge nvidia-cuda*
sudo apt-get install cuda

Download the recent CUDA drivers from
https://developer.nvidia.com/cuda-downloads

and install them:
chmod 755 cuda_7.5.18_linux.run
sudo ./cuda_7.5.18_linux.run --override

Confirm setup
which nvcc 
nvcc --version
nvidia-smi

Output should be something like
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17


Sat Mar 18 14:16:58 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 970M    Off  | 0000:01:00.0     Off |                  N/A |
| N/A   55C    P0    22W /  N/A |    586MiB /  3016MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1144    G   /usr/lib/xorg/Xorg                             366MiB |
|    0      1922    G   compiz                                         111MiB |
|    0      2302    G   ...bled/ExtensionDeveloperModeWarning/Defaul   107MiB |
+-----------------------------------------------------------------------------+


Install the cuDNN library
http://askubuntu.com/questions/767269/how-can-i-install-cudnn-on-ubuntu-16-04

Download the drivers
https://developer.nvidia.com/cudnn


Locate where your CUDA installation is. It is usually under /usr/lib/... and /usr/include, or /usr/local/cuda/.

which nvcc 
ldconfig -p | grep cuda

Copy the files:
cd extracted_driver/
sudo cp -P include/cudnn.h /usr/include
sudo cp -P lib64/libcudnn* /usr/lib/x86_64-linux-gnu/
sudo chmod a+r /usr/lib/x86_64-linux-gnu/libcudnn*

Confirm setup
ldconfig -p | grep cudnn

should be something like:
libcudnn.so.5 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcudnn.so.5
libcudnn.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcudnn.so

--- INSTALL KEY TOOLS ---

Install Machine learning essentials

sudo pip install numpy
sudo pip install pandas
sudo pip install scikit-learn
sudo pip install jupyter
sudo pip install xgboost


Now you can install tensorflow with GPU support as follows:
sudo pip install tensorflow-gpu
sudo pip install keras    

Or without:
sudo pip install tensorflow
sudo pip install keras    
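Once everything is installed, a quick smoke test confirms the stack is usable. This is a hypothetical check of my own (not from the TensorFlow docs); the tensorflow import is wrapped so the script still runs on a box where it isn't installed:

```python
# quick smoke test for the ML stack installed above
import numpy as np

def check_numpy():
    # tiny matrix solve to confirm numpy + BLAS are wired up
    a = np.array([[2.0, 0.0], [0.0, 4.0]])
    b = np.array([2.0, 8.0])
    return np.linalg.solve(a, b)  # solution is x = [1, 2]

def check_tf():
    # returns the tensorflow version string, or None if it isn't installed
    try:
        import tensorflow as tf
        return tf.__version__
    except ImportError:
        return None

print(check_numpy())
print(check_tf())
```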

Monday, December 12, 2016

byzantine agent cell war - ML agent war game

Target


Conjecture


  • Consciousness is formed by the process of abstraction when applied to self modeling and self awareness

Objective


  • to attempt to attack the problem of self awareness and self modeling
  • to force an ML agent to be aware of its own presence in the neighboring cells.

The Game


Game Board


The core items


  • 2 learning agents: red and blue
  • a 2d grid of cells


The grid of cells


  • each cell has 3 states: red, blue, ownerless
    • the owning agent (red or blue) can read, act and learn from its owned cells
  • each cell has 4 comms channels (in the directions north, south, east and west)
  • each cell has one memory channel that can be written/read
  • the grid is circular
    • the comms channel of the east most cell talks to the west most cell (and vice versa)
    • the comms channel of the north most cell talks to the south most cell (and vice versa)
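The circular wrap-around can be sketched with simple modular arithmetic (a minimal sketch; the grid size and coordinate scheme here are my own assumptions):

```python
# minimal sketch of the toroidal ("circular") grid: the neighbor in each
# direction wraps modulo the grid size, so the east-most cell's east
# neighbor is the west-most cell, and likewise for north/south
W, H = 5, 4  # assumed grid dimensions

def neighbor(x, y, direction):
    dx, dy = {"north": (0, -1), "south": (0, 1),
              "east":  (1, 0),  "west":  (-1, 0)}[direction]
    return ((x + dx) % W, (y + dy) % H)

# the east-most cell (x = W-1) talks to the west-most cell (x = 0)
assert neighbor(W - 1, 0, "east") == (0, 0)
# and vice versa
assert neighbor(0, 0, "west") == (W - 1, 0)
```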


The agent


  • each agent is one global entity
  • each cell owned by an agent runs its own instance of the agent, individually and independently of all other cells
  • each agent is installed in a cell that it owns and can only act on information for that *specific* cell it is working with
    • there is *NO* back door communication inside the agent between cells
    • the agent acts like a CNN processing unit and is isolated from the neighboring cells even if it owns them


The play sequence


Game initialization


  • cells are randomly assigned one of the 3 states of ownership
  • memory is randomly filled


The round of play


  1. randomize phase
    • NOTE: this is to break the potential unique identification of a cell by its memory contents, which creates a data leak that lets the network hard bake in a unique action per cell and never generalize
    • randomly select some percent of cells and randomize their ownership and memory values
    • the percent of randomization can vary over time and might ramp up/down if the game is in a stalemate or a single agent comes to dominate
  2. "question" phase
    • Agent reads "memory" and outputs 4 "questions" one for each direction
    • the cells send the "question" to each of the four directions
  3. "answer" phase
    • agent reads in the 4 incoming "questions" and "memory" and decides "answers"
    • the cells then send an "answer" to each of the four directions
  4. action phase
    • given "memory", "questions" and "answers" agent decides "action"
    • the agent then chooses 1 action to attack or defend and its direction of action
    • the agent chooses what to write to memory
  5. update phase
    • cell updates memory
    • env computes action outcome and updates cell ownership status
  6. learning phase
    • agents are informed of the question, answer, attack, defend, memory and ownership of each cell they *owned* at the round start
    • agents are told the score: how many cells they *now own* after the round
    • agents can then perform learning with unlimited time before signaling ready for next round
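The phases above can be strung together into a driver loop. This is only a schematic sketch: the cell representation and the stub_policy placeholder (standing in for the learning agent's per-cell computation) are invented for illustration, and neighbor routing of questions/answers is omitted.

```python
import random

DIRECTIONS = ["north", "south", "east", "west"]
VEC_SIZE = 4  # assumed comms/memory vector size

def random_vec():
    return [random.random() for _ in range(VEC_SIZE)]

def stub_policy(*inputs):
    # placeholder for the learning agent's per-cell computation
    return random_vec()

def play_round(cells, randomize_fraction=0.1):
    # 1. randomize phase: break memory-based cell fingerprinting
    for cell in random.sample(cells, max(1, int(randomize_fraction * len(cells)))):
        cell["owner"] = random.choice(["red", "blue", None])
        cell["memory"] = random_vec()

    # 2. "question" phase: each cell emits one question per direction
    for cell in cells:
        cell["questions"] = {d: stub_policy(cell["memory"]) for d in DIRECTIONS}

    # 3. "answer" phase: read incoming questions + memory, emit answers
    #    (neighbor routing is omitted in this sketch)
    for cell in cells:
        cell["answers"] = {d: stub_policy(cell["memory"], cell["questions"][d])
                           for d in DIRECTIONS}

    # 4. action phase: pick one attack/defend action and its direction,
    #    and rewrite memory
    for cell in cells:
        cell["action"] = (random.choice(["attack", "defend"]),
                          random.choice(DIRECTIONS))
        cell["memory"] = stub_policy(cell["memory"])

    # 5/6. the update and learning phases would resolve battles here and
    # hand each agent the per-cell trace plus its new score
    return cells

cells = [{"owner": random.choice(["red", "blue", None]),
          "memory": random_vec()} for _ in range(16)]
play_round(cells)
```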

updating ownership of cell


  • battle is decided on the simple formula of defend - attack
  • the current cell owner counts as +1 defend for that team
  • if the result is net positive, ownership is unchanged
  • if the result is less than 0, the enemy takes cell ownership
  • if the result is exactly 0, the cell becomes ownerless
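My reading of those rules, pinned down as a tiny sketch (the owner/attacker labels are illustrative):

```python
def resolve(owner, attacker, attack, defend):
    # battle is decided by defend - attack;
    # the current owner counts as +1 defend for its own team
    net = defend + (1 if owner is not None else 0) - attack
    if net > 0:
        return owner      # net positive: ownership unchanged
    if net < 0:
        return attacker   # net negative: the enemy takes the cell
    return None           # exactly zero: the cell goes ownerless

assert resolve("red", "blue", attack=1, defend=1) == "red"   # 1+1-1 > 0
assert resolve("red", "blue", attack=3, defend=1) == "blue"  # 1+1-3 < 0
assert resolve("red", "blue", attack=2, defend=1) is None    # 1+1-2 == 0
```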


Game Notes


The comms


  • these are a vector of numbers (ie as in word2vec vector)
  • size of 4 seems ok??

The memory


  • acts like a 5th comms channel
  • same size as a single comms channel

Ownerless cells


  • generate random responses and actions

Experimental notes and Expectations


Issues with the agents


  • the agent has no global communication method
  • the agent has no direct way to know what its position in the grid is
  • HOWEVER there are some massive *buts* and tweaking is expected
    • ISSUE: the contents of the memory might form a unique identification signal, and combined with the comms channels this can result in a data leak that exposes the location of the cell and its relationship to its neighbors in an indirectly computable way. The learning methodology could then bake such information into its neural net at each round, causing a hard coded response based on the magic id provided by the memory
    • COUNTER ACTION: add a "randomize phase" at the start of each round that damages the memory and ownership of a small percent of cells. This forces the agent to treat all cells as potential byzantine failures, rather than relying solely on baked in knowledge of what the cell is and who owns it... hopefully this forces the agent to generalize better in the long term


Tools and methods in play


  • A byzantine environment.
  • Question/answer phases utilize adversarial techniques (ie Generator and Discriminator)
  • Utilizes min-max game dynamics
  • Requires inter-cell communication, recovery from random events, and handling of long distance coordination


Expectations of experiment


  • I expect the agents to learn to communicate with friendly neighbors
  • I expect to see indirect message relay using comms -> memory -> comms
  • I expect agents to be aware of which neighbor is an enemy and which is a friend
    • ie attacks enemies only, supports friends only
  • deceptive comms: an enemy cell will attempt to mask itself as a friend
  • coordinated attack: neighboring cells may focus their attack on a target enemy to overpower it
  • self supportive defense: agents work together to defend their own front line cells so that if attacked the front line cells' defense is boosted
  • testudo formation
    • in theory a cell with 2 friendly neighbors and 2 enemies can fully defend itself with the support of its neighbors
    • ie the cell can always defend to the east, creating a +2 internal defense
    • and with support from behind it can become a +3 defense (an unkillable cell)
    • this will lead to a battle stalemate when 2 testudo walls push up against each other; at this point the agents need to break the formation using comms based infiltration... hopefully the randomization phase will allow sufficient *spying* for this to be possible


Wednesday, October 26, 2016

analytic regression in python using a parameterized family of functions

I have been studying ML (machine learning). So here is the deal.. ML is just applied statistics. You take a bunch of data and you fit a model to it. The data affords you a certain upper limit of accuracy compared to the real world. Your chosen model then subtracts a certain percentage off the top. The cost of abstraction is an imperfect fit. Simple.

So one of the primary rules of thumb when working with this stuff is start simple. You need to digest the dataset and then make incremental steps forward... it's easy, however, to be taking steps backwards without knowing it.

Now regression is just curve fitting. The simplest way to do this is a direct analytic solution: no gradient descent, nothing more complex. But this approach has limits that quickly become clear when dealing with larger datasets.

So let's solve analytic regression for a parameterized family of functions and then codify it... you'll note I have some quirks in the way I do things.. I try to avoid matrix maths since I think it hides too much and fails too readily when you get beyond 2 axes of data

Define the model:
\hat y_i := \sum_k w_k f(x_i;k)

Define the loss:
MSE = \sum_i(\hat y_i - y_i)^2

Set target (when mse gradient is 0 the error is at its min for w of 0 to n):
\frac{\partial MSE}{\partial w} = 0

Sub the model into the target equation for each "n" derivative term:
= \frac{\partial}{\partial w_n} \Big( \sum_i( \sum_k w_k f(x_i;k) - y_i)^2 \Big)

Expand square (note not expanding the sum of k treat it as a single term):
= \frac{\partial}{\partial w_n} \Big( \sum_i\big( (\sum_k w_k f(x_i;k))^2 -2 y_i \sum_k w_k f(x_i;k) + y_i^2 \big) \Big)

Transfer outer sum to terms:
= \frac{\partial}{\partial w_n} \Big( \sum_i(\sum_k w_k f(x_i;k))^2 - \sum_i 2 y_i \sum_k w_k f(x_i;k) + \sum_iy_i^2 \Big)

Transfer derivative to terms and expand/rebase squares:
= \frac{\partial}{\partial w_n} \sum_i(\sum_k w_k f(x_i;k))(\sum_j w_j f(x_i;j)) - \frac{\partial}{\partial w_n} \sum_i \sum_k 2 y_i w_k f(x_i;k) + \frac{\partial}{\partial w_n} \sum_iy_i^2

Do simple derivatives on the 2 right most terms:
* Note that the partial derivative of the sum drops all terms unless n=k
* Note that the sum of y's looks like a constant and the derivative of a constant is 0
= \sum_i\frac{\partial}{\partial w_n} (\sum_k w_k f(x_i;k))(\sum_j w_j f(x_i;j)) - \sum_i 2 y_i f(x_i;n)

Start applying the derivative product rule:
= \sum_i\sum_k w_k f(x_i;k) \frac{\partial}{\partial w_n}(\sum_j w_j f(x_i;j)) + \sum_i\sum_j w_j f(x_i;j) \frac{\partial}{\partial w_n}(\sum_k w_k f(x_i;k)) - \sum_i 2 y_i f(x_i;n)

Continue the derivative product rule:
= \sum_i\sum_k w_k f(x_i;k) f(x_i;n) + \sum_i\sum_j w_j f(x_i;j) f(x_i;n) - \sum_i 2 y_i f(x_i;n)

Rebase vars into common ones:
= \sum_i\sum_j w_j f(x_i;j) f(x_i;n) + \sum_i\sum_j w_j f(x_i;j) f(x_i;n) - \sum_i 2 y_i f(x_i;n)

Simplify:
= 2 \sum_i\sum_j w_j f(x_i;j) f(x_i;n) - 2 \sum_i y_i f(x_i;n)

Equate for a linear solver implementation (recall there are now "n" of these equations):
\sum_i\sum_j w_j f(x_i;j) f(x_i;n) = \sum_i y_i f(x_i;n)
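As a quick numeric sanity check of that final system, here is a minimal sketch of my own with a monomial basis f(x;k) = x^k on noiseless data y = 3 + 2x; solving the linear system recovers the weights exactly:

```python
import numpy as np

# check the derived normal equations:
#   sum_i sum_j w_j f(x_i;j) f(x_i;n) = sum_i y_i f(x_i;n)
f = lambda x, k: x ** k          # monomial basis
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 + 2.0 * x                # noiseless data

params = [0, 1]
left = np.array([[np.sum(f(x, j) * f(x, n)) for j in params] for n in params])
right = np.array([np.sum(y * f(x, n)) for n in params])

w = np.linalg.solve(left, right)
print(w)  # recovers the weights [3, 2] (up to float error)
```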

Now the above maths when codified produces the following code:

#!/usr/bin/env python

# analytic regression for a general linear combination of functions

import random
import numpy as np
import matplotlib.pyplot as plt

# convert the maths into a function
# note this is likely weak with respect to:
#  - overflow (because we are summing numbers)
#  - large amounts of data (no batching etc)
#  - etc
def analytic_regress(x,y,params,func):
    size = len(params)
    left  = np.zeros((size,size))
    right = np.zeros(size)

    for i in range(size):
        right[i] = np.sum(y*func(x,params[i]))

        for j in range(size):
            left[i,j] = np.sum(func(x,params[i])*func(x,params[j]))

    w = np.linalg.solve(left, right)

    model = lambda x: (np.sum(w * np.array([func(x,i) for i in params])))
    return w, model

### testing ###

def generate(func,
             x_min, x_max,
             yerr,
             count):
    x = np.random.uniform(x_min, x_max, count)
    y = np.array([func(xi) for xi in x]) + np.random.uniform(0,yerr,count)

    return x,y

# basis functions to trial
def ploy(x,n):
    return x**n

def cossin(x,n):
    if (n < 0):
        return np.sin(-n * x)
    return np.cos(n * x)

def sigmoid(x,n):
    return 1.0/(1.0 + np.exp(n-x))

def guassian(x,n):
    return np.exp(-0.5*(x-n)**2)

def square(x,n,w):
    return (1.0*np.logical_and(x >= n, x < n+w))

# a general function to trial a few regressions
def test_set(func,
             x_min, x_max, y_error, count,
             basis, params,
             x_test):

    x,y = generate(func, x_min, x_max, y_error, count)

    models = {}
    for p in params:
        w, models[p]  = analytic_regress(x,y,range(p),basis)
        print "model basis up to:", p, " has w:"
        print w

    y_test = {}
    for p in params:
        model = models[p]
        y_test[p]  = np.array([model(i) for i in x_test])

    plt.plot(x , y , "o")
    for p in params:
        plt.plot(x_test, y_test[p])

    plt.show()

# a general function to trial a single regression
def test_one(func,
             x_min, x_max, y_error, count,
             basis, params,
             x_test):
    x,y = generate(func, x_min, x_max, y_error, count)

    w, model = analytic_regress(x,y, params, basis)
    print w

    y_test = np.array([model(i) for i in x_test])

    plt.plot(x , y , "o")
    plt.plot(x_test, y_test)
    plt.show()

print " ploy regress for function y = 78 - 2x"
test_set(lambda x: (78.0 - 2.0*x),
         0.0, 6.0, 6, 100,
         ploy, [2,3,5,9,11],
         np.array(range(-1, 70))/10.0)

print " ploy regress for function y = 78 + 60x - 13x^2"
test_set(lambda x: (78.0 + 60.0*x - 13.0*x**2),
         0.0, 6.0, 6, 100,
         ploy, [2,3,5,9,11],
         np.array(range(-2, 72))/10.0)

print "square regress for function y = 1 iff 1 < x < 6 otherwise 0"
test_one(lambda x: (0 if (x < 1 or x > 6) else 1),
         0.0, 7.0, 0.4, 100,
         lambda x,n: (square(x,n,0.5)), np.array(range(14))/2.0,
         np.array(range(-50, 150))/10.0)

print "square regress for function y = 10*cos(2x)"
test_one(lambda x: 10*np.cos(2.0*x),
         0.0, 7.0, 0.4, 100,
         lambda x,n: (square(x,n,0.5)), np.array(range(14))/2.0,
         np.array(range(-50, 150))/10.0)

print "sigmoid regress for function y = 10*cos(2x)"
test_one(lambda x: 10*np.cos(2.0*x),
         0.0, 7.0, 0.4, 100,
         sigmoid, np.array(range(14))/2.0,
         np.array(range(-50, 150))/10.0)

print "guassian regress for function y = 10*cos(2x)"
test_one(lambda x: 10*np.cos(2.0*x),
         0.0, 7.0, 0.4, 100,
         guassian, np.array(range(14))/2.0,
         np.array(range(-50, 150))/10.0)

print "cos/sin regress for function y = 10*cos(2x)"
test_one(lambda x: 10*np.cos(2.0*x),
         0.0, 7.0, 0.4, 100,
         cossin, np.array(range(-6,8))/2.0,
         np.array(range(-50, 150))/10.0)

And then the various outputs...

The regression using a family of polynomials on a linear equation:

The regression using a family of polynomials on a quadratic:

The regression using square bars on a square bar:

The regression using square bars on a cos:


The regression using guassians on a cos:

The regression using sigmoids on a cos:

The regression using sin and cos on a cos:

Find the n shortest paths from a to b

// compile with: g++ --std=c++11 multipath.cpp 
//
// Problem Statement:
//  find the n shortest paths from a to b

// Answer:
//  The proposer of the question seemed to think this was hard
//  and even stated many more limitations to the question (no cycles etc)..
//  but I took it as dead easy. Since this isn't a challenge of efficiency
//  I'll stop at the first pass naive recursive answer..

#include <algorithm> // for std::push_heap / std::pop_heap
#include <utility>
#include <stdint.h>
#include <vector>
#include <map>
#include <memory>
#include <iostream>

struct Node
{
    uint32_t id_;
    std::vector<std::pair<Node*,uint32_t> > kids_;

    Node(uint32_t id) :
        id_(id)
    {}
};

struct Graph
{
    // meh should be better..
    std::map<uint32_t, std::shared_ptr<Node> > nodes_;

    Node* get(uint32_t id)
    {
        Node* n = nodes_[id].get();

        if (n == NULL)
        {
            nodes_[id] = std::make_shared<Node>(id);
            n = nodes_[id].get();
        }

        return n;
    }

    void link(uint32_t fromId,
              uint32_t toId,
              uint32_t weight)
    {
        Node* from = nodes_[fromId].get();
        if (from == NULL)
        {
            nodes_[fromId] = std::make_shared<Node>(fromId);
            from = nodes_[fromId].get();
        }

        Node* to = nodes_[toId].get();
        if (to == NULL)
        {
            nodes_[toId] = std::make_shared<Node>(toId);
            to = nodes_[toId].get();
        }

        from->kids_.emplace_back(std::make_pair(to,weight));
    }
};

struct PathNode
{
    std::shared_ptr<PathNode> parent_;
    Node*      node_;
    uint32_t   cost_;

    PathNode(std::shared_ptr<PathNode> parent,
             Node*      node,
             uint32_t   cost) :
        parent_(parent),
        node_(node),
        cost_(cost)
    {}
};

std::ostream& operator<<(std::ostream& os,
                         const PathNode& path)
{
    // inverted.. but it meets my needs
    if (path.parent_.get() != NULL)
    {
        const PathNode& p = *(path.parent_);
        os << p;
        os << "-";
    }

    if (path.node_ != NULL)
        os << path.node_->id_;

    return os;
}


// ************ NAIVE APPROACH ***********
// WARNING do be careful here.. there are some large weaknesses
// in the Naive version:
//  - negative weights will break the shortest first ordering
//  - cycles that trap the search in disconnected areas will never terminate
//
// A stronger algo would fix this by checking reachability from the node and
// pruning the impossible branches from the tree (in fact that would be my next
// step in optimization as it also saves cycles.. but I'm here for a different
// target so not going there.. a simpler fix would be to add a cycle
// limit and terminate on that..)

std::vector<std::shared_ptr<PathNode> > findPaths(Node*    from,
                                                  Node*    to,
                                                  uint32_t limit)
{
    std::vector<std::shared_ptr<PathNode> > results;
    auto comp = [] (const std::shared_ptr<PathNode>& a,
                    const std::shared_ptr<PathNode>& b) -> bool
        { return a->cost_ > b->cost_; };

    // I was using a std::priority_queue.. problem is I
    // can't print it to demonstrate the stack action to
    // the question proposer
    std::vector<std::shared_ptr<PathNode> > stack;

    // ok start the algo by adding the *from* node into a path.. and stacking it
    stack.push_back(std::make_shared<PathNode>(nullptr,
                                               from,
                                               0));

    while (results.size() < limit and not stack.empty())
    {
        // show the stack...
        // std::cout << "  stack: \n";
        // for (const std::shared_ptr<PathNode>& path : stack)
        //     std::cout << " -- c:" << path->cost_ << " p:" << *path << "\n";

        std::pop_heap(stack.begin(), stack.end(), comp);
        std::shared_ptr<PathNode> current = stack.back();
        stack.pop_back();

        // check if node was at the terminal
        if (current->node_ == to)
        {
            // register it as the next result
            results.push_back(current);

            // and end if we have our "limit" of solutions
            if (results.size() >= limit)
                return results;
        }

        // and then expand search tree using the current node
        for (const std::pair<Node*,uint32_t>& edge : current->node_->kids_)
        {
            std::shared_ptr<PathNode> step =
                std::make_shared<PathNode>(current,
                                           edge.first,
                                           current->cost_ + edge.second);

            // now here is a tricky part.. the next shortest route may
            // include the current one even if it is a *terminal* (ie self loop)
            // hence we *always* push even if it's the *end*!
            stack.push_back(step);
            std::push_heap(stack.begin(), stack.end(), comp);
        }
    }

    // ok we fell off the end of the stack; that means there are fewer than "limit" solutions
    return results;
}

void printPaths(std::ostream& os,
                const std::vector<std::shared_ptr<PathNode> >& paths)
{
    os << "Results...\n";
    for (const std::shared_ptr<PathNode>& path : paths)
    {
        os << "d:" << path->cost_ << " p:" << *path << "\n";
    }
}

void test()
{
    {
        // warning naive algo death - don't do this with a cycle, it will never terminate
        Graph disconnected;

        Node* to   = disconnected.get(0);
        Node* from = disconnected.get(1);

        printPaths(std::cout, findPaths(from,to,3));
    }

    {
        // self
        Graph self;
        self.link(0,0,1);

        Node* zero = self.get(0);

        printPaths(std::cout, findPaths(zero,zero,3));
    }

    {
        // cycle
        Graph cycle;
        cycle.link(0,1,1);
        cycle.link(1,0,1);

        Node* from = cycle.get(0);
        Node* to   = cycle.get(1);

        printPaths(std::cout, findPaths(from,to,3));
    }

    {
        // a line of nodes (asking for 3 answers.. impossible)
        Graph oneLine;
        oneLine.link(0,1,1);
        oneLine.link(1,2,1);
        oneLine.link(2,3,1);

        Node* from = oneLine.get(0);
        Node* to   = oneLine.get(3);

        printPaths(std::cout, findPaths(from,to,3));
    }

    {
        // 2 lines of nodes (asking for 3 answers.. impossible)
        Graph twoLine;
        twoLine.link(0,1,1);
        twoLine.link(1,2,1);
        twoLine.link(2,3,1);
        twoLine.link(0,4,2);
        twoLine.link(4,3,2);

        Node* from = twoLine.get(0);
        Node* to   = twoLine.get(3);

        printPaths(std::cout, findPaths(from,to,3));
    }

    {
        // the questioners challenge graph
        Graph triangle;
        triangle.link(1,2,1);
        triangle.link(1,3,2);

        triangle.link(2,1,1);
        triangle.link(2,3,3);
        triangle.link(2,4,2);
        triangle.link(2,5,1);

        triangle.link(3,1,2);
        triangle.link(3,2,3);
        triangle.link(3,5,2);
        triangle.link(3,6,1);

        triangle.link(4,2,2);
        triangle.link(4,5,3);

        triangle.link(5,2,1);
        triangle.link(5,3,2);
        triangle.link(5,4,3);
        triangle.link(5,6,3);

        triangle.link(6,5,3);
        triangle.link(6,3,1);

        Node* from = triangle.get(1);
        Node* to   = triangle.get(6);

        printPaths(std::cout, findPaths(from,to,10));
    }
}

int main()
{
    test();
}




And output will look like this:
Results...
Results...
d:0 p:0
d:1 p:0-0
d:2 p:0-0-0
Results...
d:1 p:0-1
d:3 p:0-1-0-1
d:5 p:0-1-0-1-0-1
Results...
d:3 p:0-1-2-3
Results...
d:3 p:0-1-2-3
d:4 p:0-4-3
Results...
d:3 p:1-3-6
d:5 p:1-2-3-6
d:5 p:1-2-1-3-6
d:5 p:1-2-5-6
d:5 p:1-3-6-3-6
d:5 p:1-2-5-3-6
d:7 p:1-2-1-2-5-6
d:7 p:1-2-1-2-1-3-6
d:7 p:1-2-3-6-3-6
d:7 p:1-2-5-2-1-3-6

Wednesday, October 5, 2016

Adding math rendering via katex to blogger

First edit the HTML version of the blog's template and add the following code before the end of the body

<!--KATEX RENDER BEGIN -->
<link href="https://khan.github.io/KaTeX/bower_components/katex/dist/katex.min.css" type="text/css" rel="stylesheet"/>
<script src="https://khan.github.io/KaTeX/bower_components/katex/dist/katex.min.js" type="text/javascript"></script>
<script language='javascript'>
//<![CDATA[ 
function renderLatex()
{
    var locs = document.getElementsByTagName("pre");
    for (var x = 0; x < locs.length; x++)
    {
        if (locs[x].className == "render:latex")
        {
            var eq =  locs[x].innerHTML;
            var div=document.createElement("div");
            div.className = "latexRendered";
            locs[x].parentNode.insertBefore(div,locs[x].nextSibling);

            try
            {
                katex.render("\\displaystyle{" + eq + "}", div);
                locs[x].className = "done:latex";
                locs[x].style = "display:none";
            }
            catch (err)
            {
                div.innerHTML = err;
                locs[x].className = "error:latex";
            }
        }
    }
}

renderLatex();
//]]>
</script>
<!-- KATEX RENDER END -->

You then blog using HTML tags like the following

<pre class="render:latex">y_i = \sum_iw_ix_i</pre>

And it should render like this

y_i = \sum_iw_ix_i


Thursday, September 22, 2016

intel drm driver issue and disk slowness fix in ubuntu / xubuntu 16

If you are running xubuntu or ubuntu and have an Intel graphics card then the drivers are a bit broken.. This causes general slowness and the appearance of constant disk accesses, and it seems to show up most often after suspend wakeup.

You can confirm the issue by tailing dmesg:
dmesg -w

You're watching for this kind of message:
[20474.916101] ------------[ cut here ]------------
[20474.916133] WARNING: CPU: 3 PID: 999 at /build/linux-a2WvEb/linux-4.4.0/drivers/gpu/drm/drm_irq.c:1326 drm_wait_one_vblank+0x1b5/0x1c0 [drm]()
[20474.916135] vblank wait timed out on crtc 0
[20474.916136] Modules linked in: uas usb_storage nls_iso8859_1 mmc_block udf crc_itu_t drbg ansi_cprng ctr ccm i2400m_usb i2400m wimax intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp arc4 iwldvm uvcvideo kvm_intel kvm videobuf2_vmalloc videobuf2_memops mac80211 videobuf2_v4l2 videobuf2_core irqbypass v4l2_common videodev crct10dif_pclmul crc32_pclmul iwlwifi aesni_intel media snd_hda_codec_hdmi aes_x86_64 snd_hda_codec_realtek lrw snd_hda_codec_generic gf128mul snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq joydev glue_helper ablk_helper cryptd input_leds serio_raw snd_seq_device toshiba_acpi snd_timer sparse_keymap cfg80211 snd soundcore toshiba_bluetooth wmi toshiba_haps mei_me mei lpc_ich shpchp mac_hid tpm_infineon ip6t_REJECT
[20474.916188]  nf_reject_ipv6 nf_log_ipv6 xt_hl ip6t_rt nf_conntrack_ipv6 nf_defrag_ipv6 ipt_REJECT nf_reject_ipv4 nf_log_ipv4 nf_log_common xt_LOG xt_limit xt_tcpudp xt_addrtype nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack ip6table_filter ip6_tables nf_conntrack_netbios_ns nf_conntrack_broadcast nf_nat_ftp nf_nat nf_conntrack_ftp nf_conntrack iptable_filter ip_tables x_tables parport_pc ppdev lp parport autofs4 i915 psmouse i2c_algo_bit drm_kms_helper ahci syscopyarea sysfillrect libahci sysimgblt fb_sys_fops drm sdhci_pci e1000e sdhci ptp pps_core video fjes
[20474.916224] CPU: 3 PID: 999 Comm: Xorg Not tainted 4.4.0-36-generic #55-Ubuntu
[20474.916226] Hardware name: TOSHIBA dynabook R731/36EB/Portable PC, BIOS Version 3.60   01/24/2012
[20474.916228]  0000000000000286 00000000aa071208 ffff8800a46dbb08 ffffffff813f13b3
[20474.916231]  ffff8800a46dbb50 ffffffffc00cfb38 ffff8800a46dbb40 ffffffff810810f2
[20474.916234]  ffff88014a605000 0000000000000000 0000000000000000 00000000000e0368
[20474.916237] Call Trace:
[20474.916245]  [] dump_stack+0x63/0x90
[20474.916251]  [] warn_slowpath_common+0x82/0xc0
[20474.916254]  [] warn_slowpath_fmt+0x5c/0x80
[20474.916259]  [] ? finish_wait+0x55/0x70
[20474.916272]  [] drm_wait_one_vblank+0x1b5/0x1c0 [drm]
[20474.916276]  [] ? wake_atomic_t_function+0x60/0x60
[20474.916311]  [] intel_atomic_commit+0x43a/0x6f0 [i915]
[20474.916329]  [] ? drm_atomic_set_crtc_for_connector+0x6f/0xe0 [drm]
[20474.916346]  [] drm_atomic_commit+0x37/0x60 [drm]
[20474.916359]  [] drm_atomic_helper_set_config+0x76/0xb0 [drm_kms_helper]
[20474.916374]  [] drm_mode_set_config_internal+0x62/0x100 [drm]
[20474.916389]  [] drm_mode_setcrtc+0x3cc/0x4f0 [drm]
[20474.916402]  [] drm_ioctl+0x152/0x540 [drm]
[20474.916417]  [] ? drm_mode_setplane+0x1b0/0x1b0 [drm]
[20474.916421]  [] do_vfs_ioctl+0x29f/0x490
[20474.916424]  [] ? __sb_end_write+0x21/0x30
[20474.916428]  [] ? vfs_write+0x15d/0x1a0
[20474.916430]  [] SyS_ioctl+0x79/0x90
[20474.916434]  [] entry_SYSCALL_64_fastpath+0x16/0x71
[20474.916437] ---[ end trace a5e017619a02b625 ]---

If you're getting hung by it and don't want to shut down, you can get your system moving again by doing this:
sudo pm-suspend 

The full solution is the removal of the offending driver:
sudo apt-get remove xserver-xorg-video-intel

Raspberry pi samba and git server setup

Purchase and assemble physical items

* model 3 pi + case
* 32gb micro sd card and adapter
* usb portable disk
* a usb hub (the pi lacks sufficient power output to drive a usb powered hdd)




Setup Jessie on the micro sd

Note more info at: https://www.raspberrypi.org/documentation/installation/installing-images/

1) Download the RASPBIAN JESSIE LITE .img
https://www.raspberrypi.org/downloads/raspbian/

2) Get sd card mount point

df -h | egrep "mmc|sdd"

/dev/mmcblk0p1   30G   32K   30G   1% /media/kage/3535-3133

3) unmount the card so it can be imaged

umount /dev/mmcblk0p1 

4) write img to sd card

sudo dd bs=4M if=~/Downloads/2016-05-27-raspbian-jessie-lite.img of=/dev/mmcblk0
sync

5) correct sd card partition back to max size

sudo parted /dev/mmcblk0

(parted) p free

Model: SD SL32G (sd/mmc)
Disk /dev/mmcblk0: 31.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
32.3kB  4194kB  4162kB           Free Space
1      4194kB  70.3MB  66.1MB  primary  fat16        lba
2      70.3MB  31.9GB  31.8GB  primary  ext4
31.9GB  31.9GB  15.0MB           Free Space

(parted) resizepart 2 32G                                              
(parted) p free

Model: SD SL32G (sd/mmc)
Disk /dev/mmcblk0: 31.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
32.3kB  4194kB  4162kB           Free Space
1      4194kB  70.3MB  66.1MB  primary  fat16        lba
2      70.3MB  31.9GB  31.8GB  primary  ext4

6) Eject the card and reinsert it. Confirm mounting and sizes are ok

setup passwordless ssh for the pi user (assuming the machine you're on is the accessor)


1) cd to the mounted sd card and into the pi home dir

cd /media/sd-card-uuid/
pushd .
cd home/pi

2) setup .ssh with the working machine's public ssh key
mkdir .ssh
chmod 0700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
cat ~/.ssh/id_rsa.pub  >> .ssh/authorized_keys

setup static ip


1) cd back to the mounted sd card root (we pushd'd there earlier)
popd

2) modify the interfaces setup
sudo vi etc/network/interfaces

3) setup a static hard-line ip. I'm using 192.168.1.175; feel free to adjust as needed
iface eth0 inet static
   address 192.168.1.175
   netmask 255.255.255.0
   gateway 192.168.1.1
   network 192.168.1.0
   broadcast 192.168.1.255

3b) OR setup the wifi (I would not advise this)
sudo vi etc/wpa_supplicant/wpa_supplicant.conf

# add this at the end 
network={
  ssid="network id"
  psk="network password"
}

finish up raw img adjustments


1) sync and umount card and then eject the card

sync
umount /dev/mmcblk0p1 

2) Install the sd card in the pi3. You insert the micro card into the card reader on the underside of the pi3 board, face out.
Power it up and check that it seems to have booted

3) Now back on your work machine, find out where it went

nmap -sP 192.168.1.0/24

4) If it booted at the correct address then great; otherwise check your router setup

check access and secure the default password


1) ssh into the pi. Note the default password is "raspberry" but you shouldn't need it; your ssh key is the access method.

ssh pi@192.168.1.175

2) Scramble the default password (I use a software vault, keepassx, to generate and store passwords). This way if someone gets in they can't just sudo with the default password and get root as well.

passwd

clean up and secure the ssh access


1) It's an image, so the /etc/ssh identity files are already set up.. that's not very healthy, let's rebuild them

sudo rm -f /etc/ssh/*key*
sudo dpkg-reconfigure openssh-server

2) Confirm the ssh setup is ok (chmods etc)

ls -al ~/.ssh/authorized_keys

3) Now secure the ssh access to keys only

sudo vi  /etc/ssh/sshd_config

4) Edit/add the following lines

PasswordAuthentication no
AllowTcpForwarding no
X11Forwarding no

5) And confirm that ssh still works (with a second window).. Note if you make a mistake you can always turn off the pi, eject the sd card and edit it directly on your working PC.

update machine and fix various other image guff


1) first update the image's software.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

2) after that I seemed to be getting this mess about the locale being bad:

perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),

3) so fix it with:

sudo dpkg-reconfigure locales

Setup usb disk

for more info refer to: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/

1) find out what usb disk we have attached

sudo blkid 

/dev/sda1: LABEL="EC-PHU3" UUID="2E24054E24051B0B" TYPE="ntfs" PARTUUID="8418c874-01"

2) Right, looks like it's ntfs (but it could also have been vfat etc). So setup the ntfs drivers

sudo apt-get install ntfs-3g 

3) make the mount point

sudo mkdir       /media/hdd
sudo chmod a+rwx /media/hdd

4) And check that it can actually mount

sudo mount -t auto /dev/sda1 /media/hdd

4a) ok.. stupid things are happening

modprobe: ERROR: ../libkmod/libkmod.c:557 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.11-v7+/modules.dep.bin'
ntfs-3g-mount: fuse device is missing, try 'modprobe fuse' as root

4b) well that's because.. it has the wrong /lib/modules version installed!!.. wtf!

ls /lib/modules/4.4.13-v7+/

4c) So after googling for a while it turns out there is a surprisingly simple fix... most likely something in that first apt-get upgrade cycle moved all the modules forward!

sudo reboot

pi@raspberrypi:/lib/modules $ uname -a
Linux raspberrypi 4.4.13-v7+ #894 SMP Mon Jun 13 13:13:27 BST 2016 armv7l GNU/Linux

4d) But something crazy happened to ntfs-3g; I had to install it again??? no idea why..

sudo apt-get install ntfs-3g

5) now mount and test access

sudo mount -t ntfs-3g -o uid=pi,gid=pi /dev/sda1 /media/hdd
cd /media/hdd
touch abc.txt
ls 
rm /media/hdd/abc.txt

6) if that worked and you can see the test file, move on; otherwise start googling..

Setup usb disk at permanent location


1) now we are going to make the usb disk a permanent fixture; umount it

sudo umount /media/hdd

2) update /etc/fstab. We are going to make all files owned by user pi and group users, readable and writable.
extra info at
* https://www.howtoforge.com/reducing-disk-io-by-mounting-partitions-with-noatime
* http://raspi.tv/2012/how-to-mount-and-use-a-usb-hard-disk-with-the-raspberry-pi
* http://www.omaroid.com/fstab-permission-masks-explained/
* note nobootwait doesn't work???

sudo vi /etc/fstab

2a) and add the following line
/dev/sda1       /media/hdd    ntfs-3g noatime,umask=0,uid=pi,gid=users,dmask=0007,fmask=0117 0 0

2b) or you can use something like:
/dev/sda1 /media/hdd vfat uid=pi,gid=users,umask=0022,sync,auto,nosuid,rw,nouser 0 0

3) and then mount it
sudo mount /dev/sda1

4) test access

touch /media/hdd/abc.txt
rm /media/hdd/abc.txt

setup samba for remote share


1) ok, build the samba share location...

mkdir /media/hdd/share

2) install samba

sudo apt-get install samba samba-common-bin libpam-smbpass

3) Configure samba

sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.20160922
sudo vi /etc/samba/smb.conf

4) insert or uncomment the following in the Authorization section

security = user
unix password sync = yes

5) comment out the homes and printers sections.. it's junk we don't want or need

6) insert "share" section

[Share]
  comment = 1TB_samba
  path = /media/hdd/share
  valid users = remote
  read only = No

7) bounce the samba service

sudo /etc/init.d/samba restart

8) check its setup

testparm -s

9) now on the working pc, with whatever file manager you use, "browse" your network and locate the pi (likely called "raspberrypi"); open it and confirm there is no "printers" stuff and "share" is visible

10) try to access the "share" using "guest" access, then as the user "pi" with the default password and your new password, and confirm that none of these have access

11) back on the pi, make the "remote" user and passwd it. Again, password-vault scramble a password for it.. but make certain it's type-able.
For more info refer to http://raspi.tv/2012/how-to-create-a-new-user-on-raspberry-pi

sudo adduser remote
sudo usermod -a -G users remote
groups remote
# if the samba password didn't sync from the unix one, set it first with: sudo smbpasswd -a remote
sudo smbpasswd -e remote

12) then bounce samba

sudo /etc/init.d/samba restart

13) then back on the working PC, retest the samba share using the user "remote" and their unix password, and confirm file/dir create/delete etc. Also check the guest and "pi" users again to be certain..

14) then reboot and check mount stays put and samba works over the reboot

sudo reboot

Setup the git ssh server


1) make the repos storage location

mkdir /media/hdd/repos

2) install git

sudo apt-get install git-core

3) add a git user

sudo adduser git
sudo usermod -a -G users git
groups git

4) switch over to the new "git" user

# cat the *pi* user's access keys first so you can copy them later for step 6
cat ~/.ssh/authorized_keys

su git
cd ~

5) confirm sudo limits.. (should fail)

sudo ls

6) setup the "git" users ssh access

mkdir .ssh
chmod 0700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
vi ~/.ssh/authorized_keys

7) from your working pc confirm ssh access

ssh git@192.168.1.175

8) link the repos storage to the git users home

ln -s /media/hdd/repos repos

9) create a test repo

cd ~/repos 
git init --bare test.git

9a) Note: ignore chmod errors (we have permissions forced a certain way in fstab)
error: chmod on /media/hdd/repos/test.git/config.lock failed: Operation not permitted
error: chmod on /media/hdd/repos/test.git/config.lock failed: Operation not permitted

Note to self: there might be an issue with executable scripts losing the x permission due to this... may have to rethink it.

10) Then on your working machine clone and check that the repo worked

git clone git@192.168.1.175:~/repos/test.git 
cd test/
ls
touch test.txt
git add test.txt 
git commit -a -m "Initial"
git push

11) then reclone and confirm test.txt is there in the new clone

cd ..
git clone git@192.168.1.175:~/repos/test.git test2
cd test2
ls 

setup automatic updating


now we know all the basics are working... let's harden it a bit

1) setup auto updates
for more info see https://unixsysdoc.wordpress.com/2015/05/18/xubuntu-security-and-hardening-part-i-basics/

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
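
For reference, the reconfigure step writes something like the following to /etc/apt/apt.conf.d/20auto-upgrades (check your system's copy rather than trusting this sketch):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```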

disable needless hardware


1) disable wifi and bluetooth (as I'm using the hard line)

sudo touch /etc/modprobe.d/disable_rpi3_wifi_bt.conf
sudo vi /etc/modprobe.d/disable_rpi3_wifi_bt.conf

2) add the lines

##wifi
blacklist brcmfmac
blacklist brcmutil
##bluetooth
blacklist btbcm
blacklist hci_uart

3) and reboot

sudo reboot

4) log back in to the pi and confirm wifi is off

ifconfig

setup the firewall

for more info check; https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server

1) install ufw.. its an easy to use firewall

sudo apt-get install ufw 

2) let in local ssh

sudo ufw allow ssh

3) let in local samba (and bonjour)
for more info check: https://ubuntuforums.org/showthread.php?t=806000

sudo ufw allow proto tcp to any port 135 from 192.168.0.0/16
sudo ufw allow proto udp to any port 137 from 192.168.0.0/16
sudo ufw allow proto udp to any port 138 from 192.168.0.0/16
sudo ufw allow proto tcp to any port 139 from 192.168.0.0/16
sudo ufw allow proto tcp to any port 445 from 192.168.0.0/16
sudo ufw allow proto udp to any port 5353 from 192.168.0.0/16 

4) and turn it on

sudo ufw enable

5) then retest all the samba access and git cloning

Setup the hostname to something better


1) adjust the hostname so you know which machine you're on; replace "raspberrypi" with the name you want

sudo vi /etc/hostname
sudo vi /etc/hosts

2) and reboot

sudo reboot

Clean up


1) clean up the working pc git clones

rm -rf test
rm -rf test2

2) go back to the pi, clean out the test repo and import/setup your real ones

rm -rf test.git

bonus notes..


1) I trialed wake-on-lan

sudo apt-get install ethtool
ethtool eth0 | grep wake
ethtool -s eth0 wol g

# it responds with bad news
Cannot get wake-on-lan settings: Operation not permitted