Using an Android device as a webcam for Linux

This guide shows how to use any Android device as a webcam on a Linux system. It comes in very handy when you are like “oh shit, I’m late for my video meeting”, or when you just don’t want to bother acquiring additional hardware. There is hardly any up-to-date software for this; the only solution I’ve come across is Droidcam, which sadly only works on older Ubuntu versions. But fear not, because the solution is quite simple.

As a quick note: this was tested on Ubuntu 17.10, but it should be compatible with most Linux-based systems.

Prerequisites:

  • Git
  • ffmpeg
  • A C compiler (build-essential)
  • Your Android device must be on the same network as your computer

1. Install dependencies

sudo apt-get install git build-essential

2. Install FFmpeg (you may need to compile it yourself on some distributions)

sudo apt-get install ffmpeg

3. Install v4l2 loopback driver

# Clone
cd /opt/ && git clone https://github.com/umlaeute/v4l2loopback

# Install
cd v4l2loopback && sudo make && sudo make install

# Load Kernel Module videodev
sudo modprobe videodev

# Just in case you have loaded v4l2loopback previously
sudo rmmod v4l2loopback

# Load v4l2 module
sudo insmod ./v4l2loopback.ko exclusive_caps=1


4. Find your newly created video device:

find /dev/ -name "video*"

# Example output (the device number may differ, video0-9):
/dev/video1
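If you want to sanity-check the loopback device before involving the phone at all, you can push a synthetic test pattern into it with ffmpeg and open the device in any webcam-capable application. This assumes the device came up as /dev/video1 as in the example above:

# Feed a test pattern into the loopback device (Ctrl+C to stop)
ffmpeg -f lavfi -i testsrc=size=640x480:rate=30 -pix_fmt yuv420p -f v4l2 /dev/video1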


We are now ready to set up your Android device!

5. Go to Google Play and install IP Webcam

6. Scroll down and press “Start Server”

Your device is now streaming its camera via whatever protocol you set it to use. Note down the IP address and the port of your webcam stream. Now back to your Linux machine!
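If you want, you can verify the stream from your Linux machine before wiring it up. IP Webcam usually serves an MJPEG stream on the /video endpoint; the address below is just an example, use the IP and port the app showed you:

# Preview the phone's stream (replace the address with your own)
ffplay http://192.168.1.42:8080/video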

7. Download the existing (WOHO!) webcam hook:

cd ~/
wget https://raw.githubusercontent.com/bluezio/ipwebcam-gst/master/prepare-videochat.sh
chmod +x prepare-videochat.sh

8. Edit the file and find the line that sets WIFI_IP. Fill in your Android device's IP, as in the example below.
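As an example, if IP Webcam reported 192.168.1.42, the relevant line in prepare-videochat.sh would end up looking like this (the IP is of course just a placeholder for yours):

WIFI_IP=192.168.1.42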

9. Run prepare-videochat.sh

./prepare-videochat.sh &

You should now be up and running with a brand new webcam for your Linux machine. You can verify that it is working at https://www.onlinemictest.com/webcam-test/


Move from a block partition to an LVM partition on Ubuntu 17.10

This tutorial shows how to move the Ubuntu filesystem from a regular block partition to an LVM2 partition. This may be necessary if you are silly like me and forgot to select LVM when installing your Ubuntu system. This guide assumes that you have a secondary hard drive/partition with equal or more space than your old block partition. Let's get on with it!

First, log in as root. Ah, and I take no responsibility if you mess things up 🙂

1. Identify which hard drive you will use for your LVM partition and create a partition on it. In my case I used /dev/sda and created a partition at /dev/sda2. I used the graphical interface that ships with Ubuntu Mate, but you can also use parted/gparted etc.
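If you prefer the command line for this part, lsblk gives a quick overview of all disks and partitions so you can pick the right device before partitioning it:

# List disks and partitions with sizes and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT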

2. Create a new LVM physical volume on the partition

 pvcreate /dev/sda2

3. Create a new volume group. You can call it whatever you like; I call mine vg_root.

 vgcreate vg_root /dev/sda2

4. Create a new logical volume. The -L argument specifies the volume size. In my case I create a 60 GB volume. As mentioned, it should be at least as large as your old filesystem.

 lvcreate -L 60G -n lv0 vg_root
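Before formatting anything, it can be worth double-checking that the physical volume, volume group and logical volume all look as expected:

# Summarize LVM physical volumes, volume groups and logical volumes
pvs
vgs
lvs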

5. Format the volume with ext4.

 mkfs.ext4 /dev/vg_root/lv0

6. Mount the newly created volume

mkdir /mnt/new_root
mount /dev/vg_root/lv0 /mnt/new_root/

7. Copy old filesystem to the mounted location

cp -ax / /mnt/new_root/
cp -ax /boot /mnt/new_root/
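If you prefer rsync over cp (assuming rsync is installed), something like this does the same job while preserving ACLs and extended attributes and staying on one filesystem:

# -a archive mode, -A ACLs, -X extended attributes, -x don't cross filesystem boundaries
rsync -aAXx / /mnt/new_root/
rsync -aAXx /boot /mnt/new_root/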

8. Edit /etc/fstab on the new root (/mnt/new_root/etc/fstab). Comment out the old / mount point and add

/dev/vg_root/lv0 / ext4 errors=remount-ro 0 1
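Just to illustrate, the relevant part of the fstab on the new root should end up looking roughly like this, with your old root entry commented out (the UUID is a placeholder for whatever was there before):

# UUID=<old-root-uuid> / ext4 errors=remount-ro 0 1
/dev/vg_root/lv0 / ext4 errors=remount-ro 0 1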

9. Bind system directories and chroot into the new filesystem

mount -o bind /dev /mnt/new_root/dev
mount -t proc none /mnt/new_root/proc
mount -t sysfs none /mnt/new_root/sys
cd /mnt/new_root/
chroot .

10. Update Grub

 update-grub


11. Reboot

You should now be up and running on your newly created LVM partition.

Oh, so you fucked up? If you managed to do the copy procedure correctly, you can still save this. Boot up Ubuntu from a live CD.

12. Install Boot-Repair

sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair && boot-repair

13. Follow instructions carefully

14. Reboot

Now you should be up and running!

Master Thesis: Deep Reinforcement Learning using Capsules in Advanced Game Environments

I just finished my Master’s Thesis at University of Agder. Read it in full here.

Abstract:

Reinforcement Learning (RL) is a research area that has blossomed tremendously in recent years and has shown remarkable potential for artificial intelligence based opponents in computer games. This success is primarily due to the vast capabilities of Convolutional Neural Networks (ConvNet), enabling algorithms to extract useful information from noisy environments. Capsule Network (CapsNet) is a recent introduction to the Deep Learning algorithm group and has only barely begun to be explored. The network is an architecture for image classification, with superior performance for classification of the MNIST dataset. CapsNets have not been explored beyond image classification.
This thesis introduces the use of CapsNet for Q-Learning based game algorithms. To successfully apply CapsNet in advanced game play, three main contributions follow. First, the introduction of four new game environments as frameworks for RL research with increasing complexity, namely Flash RL, Deep Line Wars, Deep RTS, and Deep Maze. These environments fill the gap between relatively simple and more complex game environments available for RL research and are used in the thesis to test and explore CapsNet behavior.
Second, the thesis introduces a generative modeling approach to produce artificial training data for use in Deep Learning models including CapsNets. We empirically show that conditional generative modeling can successfully generate game data of sufficient quality to train a Deep Q-Network well.
Third, we show that CapsNet is a reliable architecture for Deep Q-Learning based algorithms for game AI. A capsule is a group of neurons that determines the presence of objects in the data and has been shown in the literature to increase the robustness of training and predictions while lowering the amount of training data needed. It should, therefore, be ideally suited for game playing.

AI2017: Towards a Deep Reinforcement Learning Approach for Tower Line Wars

I just published a paper to AI-2017. Read it in full here!

Abstract

There have been numerous breakthroughs with reinforcement learning in recent years, perhaps most notably on Deep Reinforcement Learning successfully playing and winning relatively advanced computer games. There is undoubtedly an anticipation that Deep Reinforcement Learning will play a major role when the first AI masters the complicated game plays needed to beat a professional Real-Time Strategy game player. For this to be possible, there needs to be a game environment that targets and fosters AI research, and specifically Deep Reinforcement Learning. Some game environments already exist, however, these are either overly simplistic such as Atari 2600 or complex such as Starcraft II from Blizzard Entertainment. We propose a game environment in between Atari 2600 and Starcraft II, particularly targeting Deep Reinforcement Learning algorithm research. The environment is a variant of Tower Line Wars from Warcraft III, Blizzard Entertainment. Further, as a proof of concept that the environment can harbor Deep Reinforcement algorithms, we propose and apply a Deep Q-Reinforcement architecture. The architecture simplifies the state space so that it is applicable to Q-learning, and in turn improves performance compared to current state-of-the-art methods. Our experiments show that the proposed architecture can learn to play the environment well, and score 33% better than standard Deep Q-learning, which in turn proves the usefulness of the game environment.

NIK2017: FlashRL: A Reinforcement Learning Platform for Flash Games

I just published a paper to NIK2017. Read it in full here!

Abstract

Reinforcement Learning (RL) is a research area that has blossomed tremendously in recent years and has shown remarkable potential in, among other things, successfully playing computer games. However, there only exist a few game platforms that provide the diversity in tasks and state-space needed to advance RL algorithms. The existing platforms offer RL access to Atari and a few web-based games, but no platform fully exposes access to Flash games. This is unfortunate because applying RL to Flash games has the potential to push the research of RL algorithms.

This paper introduces the Flash Reinforcement Learning platform (FlashRL) which attempts to fill this gap by providing an environment for thousands of Flash games on a novel platform for Flash automation. It opens up easy experimentation with RL algorithms for Flash games, which has previously been challenging. The platform shows excellent performance with as little as 5% CPU utilization on consumer hardware. It shows promising results for novel reinforcement learning algorithms.

Source code can be found here


LineWars: Reinforcement Learning idea

Now that the DeepRTS engine is in a stable state and I'm ready to research reinforcement learning algorithms for it, I need a new side project. Currently I'm planning to create a web-based, VNC-compatible game based on Hero Line Wars, a Warcraft III modification.

The objective of this game is to control a hero unit which defends your base. You defend your base by killing off enemies spawned by the opposing player. The secondary objective of the game is to send units to the opposing player, attempting to overrun him. If you succeed in overrunning the opposing player, your units destroy his base and you win the game.

This game should be fairly simple to implement, and hopefully it will only require ~1000 lines of code including the logic engine and graphics. The reason for implementing such a game is that it has a reduced state and action space compared to DeepRTS and other RTS games, but it is still fairly complex to master.

DeepRTS and ML-Algorithms

At the beginning of 2017, I started researching how to apply tree-search algorithms to real-time strategy (RTS) games. microRTS is an implementation of an RTS game in its simplest form and allows for research in various areas of machine learning.

I developed numerous Monte-Carlo based tree searches and they gave good results in microRTS. I figured that microRTS was too simple, so I started work on a new engine based on the principles of Warcraft 2. This implementation had a variable complexity level based on which features I enabled in the configuration file.

The first version of this game was developed in Python.

When version 1.0 was complete, I stumbled upon performance issues. Python was simply not fast enough to support tree-search algorithms and yielded only ~40 node visits per game frame. For algorithms utilizing the GPU this was not an issue, but CPU-bound tasks were a big problem. When I started optimizing the game engine, I used Cython, which compiles Python to C/C++. Cython yielded very good results, but at the cost of reduced debugging capabilities. This made further development hard, so the engine was rewritten in pure C++.

The new C++ implementation was much faster, and it also embeds Python so that libraries like Tensorflow, Keras and Theano can be utilized for machine learning. Furthermore, it increases the tree-search performance to ~10,000 node visits per game frame, which is a huge boost compared to the Python implementation.

The game currently has 4 algorithms implemented: Deep Q-Network, Monte-Carlo-Tree-Search, Monte-Carlo-Action-Search, and Monte-Carlo-Search-Direct. Each of the MC algorithms is just a different way of interpreting the score for each node, thus giving very different results.

As we can see from the graph, plain MCTS outperforms my attempts to make “shortcuts” in the algorithm. MCTSDirect simply skips some intermediate nodes which it attempts to classify as “useless”, while MCAS attempts to build a Q-table of which action should be taken at which time.

DQN is DeepMind's implementation of an AI for Atari games. The reason it does not perform well here is the data representation and the depth/size of the network.