
Masters Scholarships in Deep Learning & Robotics

Applications for this position are now closed

Istanbul Technical University Faculty of Computer and Informatics Engineering

The Project:

  • Neural networks for robot vision & action.
  • Real robots, state-of-the-art vision.
  • Artificial intelligence.

Will suit those interested in:

  • Artificial intelligence.
  • Machine learning.
  • Computer vision.
  • Robotics.

Responsibilities:

  • Development and training of neural network architectures.
  • Application to simulated and real mobile robots.
  • Deep learning frameworks (Keras, TensorFlow) – Python. (A minimal sketch follows this list.)
  • Robot Operating System (ROS).
  • Computer vision algorithms development & implementation.
  • Obstacle avoidance, navigation & mapping.
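
For the curious, here is a minimal sketch of the kind of model such work might start from: an illustrative Keras convolutional network that classifies camera frames. The architecture, input size and class labels are my own assumptions for illustration, not the project's actual model.

    # Illustrative sketch only: a small Keras CNN classifying camera frames
    # (e.g. path-clear / obstacle-left / obstacle-right). Architecture,
    # input size and classes are assumptions, not the project's model.
    from tensorflow.keras import layers, models

    def build_model(num_classes=3):
        model = models.Sequential([
            layers.Input(shape=(120, 160, 3)),        # downsampled camera frame
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_model()
    model.summary()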

To apply or inquire, email djduff@itu.edu.tr.

Some Common English Errors By Native Turkish Speakers

In my job as a lecturer in Computer Engineering in Turkey, I frequently have the opportunity to read the writings of native speakers of Turkish who have English as a second language. Here I have compiled some of the typical deviations from canonical English (if there is any such thing); the hope is that Turkish speakers of English can examine these and improve their writing.

Please do not use the list below as a reference for how Turkish native speakers use English in comparison to native speakers of other languages – the target of this post is Turkish speakers. Many of the patterns of usage will be the same for native speakers of other languages, as many stem from peculiarities of English.

İlgilenmek/be interested in/get involved in

The problematic text: I want to be interested in robotics.

The correct text: I am interested in robotics and I want to get involved in it.

Comment: The Turkish word ilgilenmek covers several meanings that English splits into separate words; interested and involved each carry one of those meanings.

There is not any

I have deleted this item as the correct usage seems to depend a lot on the region of English used. However, the idiomatic phrase might not be as expected.

Homeworks / homework / assignments

The problematic text: We have too many homeworks.

The correct text: We have too much homework / too many assignments.

Comment: Homework is an uncountable noun, expressing a continuous rather than a discrete quantity. Assignments are countable.

By using

I have deleted this item as the issue was more related to a specific example of text and the example I had was not appropriate. With thanks to the individual who challenged me on this.

Kontrol / control / checking

The problematic text: This will be controlled by human resources.

The correct text: This will be checked by human resources.

Comment: Although kontrol would seem to translate directly into control, it is more often translated as check. The English to control frequently translates into Turkish as denetlemek.


Masters Scholarship 2014

2 Year Masters Scholarship in Sensory Augmentation


This scholarship is now closed, though enthusiastic individuals are welcome to get in touch about it.



We have an early-stage project about giving people extra senses using new technologies. We will use depth cameras, such as time-of-flight cameras, to provide 3D data that will be delivered as a soundscape.

Our project is well specified and funded; we just need the right person to work on it with us.

This will appeal to those interested in:

  • Artificial intelligence and machine learning.
  • Psychology & cybernetics.
  • Assistive technologies.
  • Spatial audio and signal processing.

To apply, email your CV along with a short message summarising your relevant interests and experience, to djduff@itu.edu.tr.

Main Responsibilities

  • Develop a physical user interface and sensory augmentation prototype with Arduino.
  • Process sensory 3D data in real time with the Point Cloud Library.
  • Produce spatial soundscapes using HARK and SLAB3D (a toy sketch of the soundscape idea follows this list).
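
To give a flavour of the sensory-substitution idea, here is a toy sketch of the soundscape mapping, assuming only NumPy; it is not the project's pipeline (which will use the tools listed above). One row of a depth image becomes a short stereo buffer: nearer surfaces are louder, horizontal position sets the left/right pan, and pitch also encodes azimuth.

    # Toy illustration only: map one row of a depth image to a short stereo
    # buffer. Nearer surfaces sound louder; horizontal image position sets
    # the pan and the pitch. The real project would use HARK and SLAB3D.
    import numpy as np

    def row_to_soundscape(depth_row, sr=16000, dur=0.2, max_depth=4.0):
        t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
        left, right = np.zeros_like(t), np.zeros_like(t)
        n = len(depth_row)
        for i, d in enumerate(depth_row):
            if not np.isfinite(d) or d <= 0 or d > max_depth:
                continue                          # no return / out of range
            amp = 1.0 - d / max_depth             # nearer -> louder
            pan = i / (n - 1)                     # 0 = far left, 1 = far right
            freq = 300.0 + 900.0 * pan            # pitch also encodes azimuth
            tone = amp * np.sin(2 * np.pi * freq * t)
            left += (1.0 - pan) * tone
            right += pan * tone
        stereo = np.stack([left, right], axis=1)
        peak = np.abs(stereo).max()
        return stereo / peak if peak > 0 else stereo

    # Synthetic depth row: mostly far wall, one near obstacle on the right.
    row = np.full(32, 3.5)
    row[24] = 0.8
    buf = row_to_soundscape(row)  # write out with e.g. the soundfile package

A real system would of course run continuously on full depth frames and use proper spatialisation rather than simple panning.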

Details

Application Deadline: December 31, 2014.

Estimated Duration: January/February 2015 – January/February 2017.

Scholarship amount: 1500 TL/month, not taxed.

Main Requirement: Registered full-time in a Masters degree programme in Turkey.

Principal Investigator: Damien Jade Duff, http://djduff.net

Researcher: Gökhan İnce, http://www.gokhanince.com


What is a PhD?

Science is a room full of toys and a PhD is where you take a screwdriver to one of them, bash it until the batteries fall out, spend many years trying to get the batteries back in without breaking it, mangling it even further, swap the screwdriver for a mallet, ultimately give up, ram the screwdriver into a doll’s eyesocket, take a photo and have it laminated, then become exceedingly interested in a speck of dust on the windowsill until someone has mercy and shoots you.

At least, in theory.




Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Being a baby is a bit like playing a text adventure


You are in a room. There are objects scattered around. One of them is blue. One of them is a circle. There is a face here.

What do you do?

> Put circle in face.

The face is not a container. Please try again.

The face is now sad-looking.

You see a hole of some kind.

> Use blue thing in hole.

It won't fit.

> Use blue thing in hole.

It fit, but now it's stuck!

> Pull blue thing from hole.

It won't come out!

> Use circle on blue thing.

Now they're all stuck.

> West

You don't know that direction.

> Forward

You are in a room. There are objects scattered around. One of them is blue. One of them is a circle. They are stuck in a hole. There is a face here.

> Hit circle.

It made a satisfying bomp sound.

> Hit face.

Something happened!

> Dig.

Success! You have acquired:

- A thread of something.

> Use thread.

You don't know how to do that. The thread is in your mouth.

> Take thread.

Well done, you took the thread! Your hand is in your mouth, with the thread.

> Examine world.

You are in a room. You are now hungry. You have started to cry.

> Forward.

It doesn't work. You are hungry. You are crying.

> Cry.

Yes.


Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Perceiving while planning

Consider what the eyes are doing when involved in solving a jigsaw puzzle. While the mind is darting about, imagining placement possibilities, considering combinations, and pondering strategies, the eyes too are darting from place to place over the puzzle, examining pieces relevant to a considered placement, checking edges for compatibility, and studying the layout. The eyes are responding to the deliberation of the mind, checking expectations and seeking out necessary information. They are ensuring that the deliberation is rooted in the physical reality of the problem. [1]

This thought motivates work [1,2] that I presented yesterday at the Knowledge Representation and Reasoning for Robotics workshop at ICLP 2013 in Istanbul (work that results from the current TÜBİTAK-funded leg of my collaboration with the Sabancı University Cognitive Robotics Laboratory).

There is lots and lots of lovely work on planning for perceiving, planning under uncertainty, and planning in a changing world – topics of immediate interest to roboticists. In the current work we take a different angle: we set those considerations aside for a while and concentrate on the best way of integrating perceptual processing with classical planning, in terms both of conceptual simplicity and of efficiency. It is rather preliminary work.

For more information, check out the paper [1] or the slides [2] from yesterday’s talk, where I go into the motivation behind planning for manipulation and why mobile manipulation is a ripe topic; the graphics are pretty, too. Unfortunately the movies will only play if you open the slides in LibreOffice/OpenOffice (there is a PDF version too).

[1] Duff, D. J., Erdem, E., & Patoglu, V. (2013). Integration of 3D Object Recognition and Planning for Robotic Manipulation: A Preliminary Report. ICLP-KRR 2013, Istanbul. Retrieved from http://arxiv-web3.library.cornell.edu/abs/1307.7466

[2] http://files.djduff.net/Presentations/KRRR2013FinalPresentation.tar.gz (slides)


A tool for labelling the pose of objects in 3D monocular videos

As part of my project on monocular 3D model-based tracking, I created a GUI tool for labelling object poses in video sequences. The tool interpolates between labelled frames, reducing the work required of the user in labelling the frames.
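
In spirit, the interpolation between labelled keyframes looks something like the following sketch – my own minimal NumPy reconstruction of the idea, not the tool's actual code. Positions are interpolated linearly and orientations by spherical linear interpolation (slerp) of quaternions:

    # Minimal reconstruction of the idea, not the tool's actual code:
    # interpolate an object pose between two labelled keyframes, linearly
    # for position and by quaternion slerp for orientation.
    import numpy as np

    def slerp(q0, q1, u):
        # Spherical linear interpolation between unit quaternions q0 and q1.
        q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
        dot = np.dot(q0, q1)
        if dot < 0.0:                     # take the shorter arc
            q1, dot = -q1, -dot
        if dot > 0.9995:                  # nearly parallel: lerp, renormalise
            q = q0 + u * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1 - u) * theta) * q0
                + np.sin(u * theta) * q1) / np.sin(theta)

    def interpolate_pose(pos0, quat0, frame0, pos1, quat1, frame1, frame):
        u = (frame - frame0) / (frame1 - frame0)
        return (1 - u) * pos0 + u * pos1, slerp(quat0, quat1, u)

    # Pose labelled at frames 10 and 20; recover the pose at frame 15.
    pos, quat = interpolate_pose(
        np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0, 0.0]), 10,
        np.array([0.2, 0.0, 1.0]), np.array([0.966, 0.0, 0.259, 0.0]), 20,
        15)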

The idea is to use this labelled data with image-space error measures, since the eye of the labeller will almost certainly not distinguish the depths of objects with any precision. This is okay because in the first instance we are interested in human-level performance. This idea is discussed further in sections 3.7 and 7.2 of my thesis.

The labelling tool can be found here:

https://bitbucket.org/damienjadeduff/label_vid_3d

If you make use of this software, please cite our paper:

Duff, D. J., Mörwald, T., Stolkin, R., & Wyatt, J. (2011). Physical simulation for monocular 3D model-based tracking. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China: IEEE. doi:10.1109/ICRA.2011.5980535. Preprint available at eprints.bham.ac.uk/978.

University of Birmingham School of Computer Science – Picture

[Image: pencilised rendering of the University of Birmingham School of Computer Science]

An image of the University of Birmingham School of Computer Science and clock-tower under snow, from 2006. The photograph has been pencilised by undergoing some torture at the hands of a MATLAB script.

Creative Commons Licence
You are welcome to use this work under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Project: Integration of sensing, planning and action

Since September I have been working at the Sabancı Üniversitesi Cognitive Robotics Laboratory (CogRobo Lab) on a new project supported by the TÜBİTAK 2216 international post-doctoral fund. I was attracted to the CogRobo Lab because the work there focuses on the interface between “high-level” and “low-level” reasoning, and I am particularly interested in the relationship between physical and abstract cognition. The CogRobo Lab is also ambitious in terms of the potential applications of the technology we are developing there.

The CogRobo Lab was founded on the premise that logic programming is a software engineering tool with great promise for specifying robot problems and generating abstract solutions to them. That promise relies in turn on the extensibility of logic databases (roughly, the ability to add new facts to knowledge bases) and on the existence of increasingly powerful general-purpose reasoners. Action planning is one area where logic programming is customarily applied in a robotics domain.

Unfortunately, making such an approach work in real systems requires a great deal of effort to bring these abstract formalisms into the real world. They are difficult to apply in practice because of the vast number of “low-level” behaviours and computations needed to make them work. Frequently these low-level computations are brought in as a range of special-purpose hacks, and even when they are not hacks they tend to remain special-purpose, both because their development is time-consuming and because many difficult problems in this area remain unsolved.

As such, the approach at the CogRobo Lab is to try to realise the promise of logic programming by exploring the necessary interfaces between general-purpose reasoning mechanisms and more special-purpose ones. The idea is to create general-purpose components and interfaces that can support logic programming as a robot programming paradigm. In planning, this line of research is known as hybrid logic programming, and one approach to it particularly explored at present in the CogRobo Lab is that of the external predicate, or semantic attachment.

My own research at the lab investigates how sensory processing components can be constructed so that they can be re-used by logic programs; so that they are reflexive to the information needed by planning (avoiding unnecessary sensory computation and leaving room for more relevant expensive computation); and so that they can adapt to, and possibly make use of, low-bandwidth information supplied through the “high-level” interfaces (for example, verbal knowledge). Furthermore, they must be integrated with geometric, kinematic and dynamic reasoning (for example, motion planning or object stability analysis).

To this end I am investigating different paradigms of data association between sensory and high-level components, different procedural models of integration with planning, as well as reflexive sensory computation. All of this is in the context of mobile manipulation with a Kinect-equipped Kuka YouBot within the domain of the Robocup@Work competition, with difficult constrained stacking (e.g. shelving) problems.
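
To make the “reflexive” idea concrete, here is a toy Python sketch written under my own assumptions – none of these names correspond to the lab's actual interfaces. A plan executor runs a detector only when a fluent is actually mentioned in the next action's preconditions, caching results so nothing is sensed twice:

    # Toy sketch of sensing that is "reflexive" to planning: detectors run
    # only when the next action's preconditions need them, with caching.
    # All names are illustrative, not the lab's actual interfaces.

    def detect_clear(location):
        print(f"running obstacle detector at {location}")
        return True                       # stub: pretend the location is clear

    def localise_object(obj):
        print(f"localising {obj}")
        return True                       # stub: pretend the object was found

    DETECTORS = {"clear": detect_clear, "at": localise_object}

    def preconditions_hold(action, cache):
        for fluent, arg in action["preconds"]:
            if (fluent, arg) not in cache:        # sense only on demand
                cache[(fluent, arg)] = DETECTORS[fluent](arg)
            if not cache[(fluent, arg)]:
                return False                      # a real system would replan
        return True

    def execute_plan(plan):
        cache = {}
        for action in plan:
            if not preconditions_hold(action, cache):
                return "replan"
            action["do"]()
        return "done"

    plan = [
        {"preconds": [("at", "cup"), ("clear", "shelf")],
         "do": lambda: print("pick up cup")},
        {"preconds": [("clear", "shelf")],        # cached: no second detection
         "do": lambda: print("place cup on shelf")},
    ]
    print(execute_plan(plan))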

The CogRobo Lab has its web-page here:
http://cogrobo.sabanciuniv.edu/

CCalc/SWI Prolog/Point Cloud Library — external predicate example

While preparing a guest lecture, I created the following example of an SWI Prolog external predicate interfacing with the Point Cloud Library (PCL). A CCalc example is also provided (CCalc is an action-language interpreter built on SWI Prolog). The readme should do the explaining.

The toy example is multi-robot path planning with obstacles, where the obstacles are obtained by finding the ground plane and then counting the points above that plane (the example uses cropped Kinect data from one of the PCL sample files); a rough sketch of the plane-finding step follows the link below.

https://bitbucket.org/damienjadeduff/testpcl_cglab_sabanci
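
The plane-finding step of that toy example can be sketched roughly as follows. This is a NumPy re-rendering of the idea under my own assumptions; the repository itself does it through PCL. A plane is fitted to the cloud by RANSAC, and points lying more than a threshold above it are counted as obstacle evidence:

    # Rough NumPy sketch of the obstacle step (the repository uses PCL's
    # own segmentation): RANSAC-fit a ground plane, then count the points
    # lying more than a height threshold above it.
    import numpy as np

    def fit_plane_ransac(points, iters=200, tol=0.02, seed=0):
        rng = np.random.default_rng(seed)
        best_count, best_model = 0, None
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                          # degenerate (collinear) sample
            normal /= norm
            d = -normal @ sample[0]
            count = (np.abs(points @ normal + d) < tol).sum()
            if count > best_count:
                best_count, best_model = count, (normal, d)
        return best_model

    def count_obstacle_points(points, height=0.05):
        normal, d = fit_plane_ransac(points)
        if normal[2] < 0:                         # orient the normal upwards
            normal, d = -normal, -d
        return int(((points @ normal + d) > height).sum())

    # Synthetic test: a flat floor plus a small box of points above it.
    rng = np.random.default_rng(1)
    floor = np.column_stack([rng.uniform(-1, 1, 500),
                             rng.uniform(-1, 1, 500),
                             rng.normal(0.0, 0.005, 500)])
    box = np.column_stack([rng.uniform(0.2, 0.4, 50),
                           rng.uniform(0.2, 0.4, 50),
                           rng.uniform(0.2, 0.3, 50)])
    print(count_obstacle_points(np.vstack([floor, box])))  # expect about 50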