Year: 2019-present
Technologies: Python, C++, AWS, JavaScript
I am currently working as a member of the Imaging Sciences team at Amazon, which is focused on automating 3D
content creation at scale. Put simply, our mission is to create a 3D model for every product
in Amazon's catalog.
The work involves close collaboration with research scientists to develop new machine learning and computer
vision techniques for the reconstruction of 3D models.
Responsibilities include:
Designing, reviewing, and guiding the implementation of technical infrastructure for validating the
effectiveness of experimental techniques
Coordinating with partner teams to integrate proven infrastructure into production environments
Year: 2016-2019
Technologies: Java, Spring, Ruby, AWS
Front-end development for Vendor Self-Services, a part of Amazon's Seller Central website that allows vendors to manage the
products they sell to Amazon (so that Amazon can sell them on to end customers). The team's goal was the fully
automated, data-driven generation of the web form vendors use to list their products on Amazon.
While this may sound simple, the work involved integrating a large number of upstream services into a simple,
maintainable web page, so that vendors could list new items in a self-service manner.
My responsibilities on this team included:
Design, implementation, and review of complex features
Year: 2015-2016
Technologies: C++, OpenGL, GLSL, Qt
IPS CaseDesigner is a new product developed by Medicim Nobel Biocare for the partner company KLS Martin. This
desktop application allows maxillofacial surgeons to plan their surgeries in 3D. Some of its more important
features are:
Generation of a high-quality 3D model based on patient DICOM data
Tools for creating clinical cuts in the patient 3D model
Movement tools that allow precise and accurate movement of the cut parts
A tool for generating a 3D template design that can be 3D printed for use in surgeries
This screenshot shows the 3D workspace of IPS CaseDesigner. It shows the parts of the mandible and maxilla
created by clinical cuts, which the surgeon can move in order to determine their ideal position.
After this movement, the surgeon will be able to generate a 3D model of a template which can be 3D printed
for use during surgery.
During my time on this project, I acted as a software team lead with the following responsibilities:
Ensuring that features were developed in a timely fashion
Scoping the features to be implemented by the team
Designing, implementing, and testing complex software features
Year: 2013-2015
Technologies: C++, OpenGL, GLSL, Qt
NobelClinician
is currently one of the products of Medicim Nobel Biocare. It provides a clinician with various tools to
carefully plan a dental implant surgery. For example, the software features high-quality volume rendering of
X-ray data for diagnostic purposes, which the clinician can use to create a virtual setup of implants. That
setup is then converted into a 3D template shape with millimeter precision, which in turn can be ordered for
production in Nobel Biocare's laboratories.
This screenshot shows one of the diagnostic workspaces of NobelClinician. At the top you can see the
panoramic reslice viewer; the cross-sectional reslice viewer is shown in the bottom left corner and the 3D
viewer in the bottom right corner. Each of these viewers provides visualizations of the nerves, the implants
and the generated 3D template.
I was part of the development team for two years, and in that time I had several responsibilities, including
the following:
Implementation of various algorithms, e.g. calculation of a 3D distance map, registration of a 3D surface
onto a (volumetric) image, volume rendering of X-ray data, etc.
Design of various components in the software
Enabling the use of unit tests in the software
Acting as team lead on a two-month project, which included extra tasks (in addition to the normal
responsibilities of a developer) such as making sure all features were finished on time, dividing the
work among the developers in the team and planning tasks for the next iteration
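To give a flavor of the first item: a 3D distance map assigns to each voxel its distance to the nearest object voxel. The sketch below is a deliberately naive brute-force illustration of the concept (the volume and its contents are made up for the example; a production implementation would use a fast distance-transform algorithm):

```python
import numpy as np

# Hypothetical tiny binary volume: 1 on the object, 0 elsewhere.
shape = (8, 8, 8)
volume = np.zeros(shape, dtype=np.uint8)
volume[4, 4, 4] = 1  # a single object voxel for the sketch

# Brute-force Euclidean distance map: for every voxel, the distance to
# the nearest object voxel. This is O(voxels * object_voxels) and only
# meant to show what a distance map is, not how to compute one fast.
grid = np.indices(shape).reshape(3, -1).T       # all voxel coordinates
surface = np.argwhere(volume == 1)              # object voxel coordinates
diffs = grid[:, None, :] - surface[None, :, :]  # pairwise offsets
distance_map = np.sqrt((diffs ** 2).sum(-1)).min(1).reshape(shape)

print(distance_map[4, 4, 4])  # 0.0 (on the object)
print(distance_map[4, 4, 6])  # 2.0 (two voxels away)
```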
A promotional video showing the workflow of OsseoCare Pro: first the clinician plans the surgery using
NobelClinician, then makes it available to OsseoCare Pro via the online cloud platform NobelConnect.
Once the surgery starts, the clinician can download the planning onto the iPad and start drilling.
This was a very cool project that was launched to facilitate dental implant surgery by enabling the surgeon to
connect an iPad to the actual drilling machine. To accomplish this, we collaborated with another company that
built the drilling machine (with iPad interface) and provided us with the necessary tools to control the drill
from inside the iOS
app that we were developing. The goal of the app was to make it possible for the surgeon to plan the
surgery in NobelClinician (on the desktop) and then download that planning onto the iPad. The app would then
nicely display all the steps required to execute the planning, while also configuring the drilling machine
accordingly (actual control over the drill was still left up to the surgeon for safety purposes).
I was involved in developing various parts of the application, which included implementation of the UI,
integrating the drilling machine library into the application (in such a way that it could easily be upgraded
when new versions became available) and the implementation of a "report" feature that enabled the clinician to
export important surgery data to a PDF file.
A promotional video showing how the NobelClinician app helps clinicians communicate to a patient how the
dental implant surgery will proceed.
Two months after I started working for Medicim Nobel Biocare, I was assigned to a team that was tasked with
creating a mobile version of the NobelClinician software. The goal of the app was to make
it easier for clinicians to communicate medical information and surgery planning data to patients. For instance,
the app allows the clinician to show the patient how the implant surgery will be executed using a (pseudo) 3D
viewer and X-ray images. For legal purposes, a signature feature was also added allowing the patient to sign off
on the procedure.
The pseudo 3D view of the app. The user can drag up/down or left/right on the view and the model will turn
in the corresponding direction. Alternatively, the user can also tap one of the listed items in the left
column, which will cause the 3D view to center on the selected object.
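The screenshot-based rotation described above can be sketched as follows: the model is pre-rendered from a number of evenly spaced angles, and a horizontal drag selects which pre-rendered frame to display. The names and parameters below are illustrative, not the actual app code:

```python
# Illustrative sketch of screenshot-based pseudo-3D rotation: a
# horizontal drag is mapped to an index into a ring of pre-rendered
# frames, giving the illusion of turning the model.

NUM_FRAMES = 36        # one frame every 10 degrees (assumed)
PIXELS_PER_FRAME = 12  # drag sensitivity in pixels (assumed)

def frame_for_drag(start_frame: int, drag_dx: float) -> int:
    """Map a horizontal drag distance (pixels) to a frame index."""
    steps = int(drag_dx / PIXELS_PER_FRAME)
    return (start_frame + steps) % NUM_FRAMES  # wrap around a full turn

print(frame_for_drag(0, 120))   # 10
print(frame_for_drag(35, 24))   # 1 (wraps past the last frame)
print(frame_for_drag(0, -12))   # 35 (dragging the other way)
```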
My responsibilities for this project covered all aspects of the application. Among other things, I
implemented the rendering code for the signature and annotation features (which converted the "raw" drawing into
a smooth curved spline), I built the UI from designs made by the functional team, and I worked on the
pseudo-rendering of the 3D model (volume rendering was not possible at that time, so we opted for a
screenshot-based approach which gave the illusion of 3D rotation).
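The raw-to-spline conversion can be illustrated with a common stroke-smoothing technique: the midpoints between successive touch samples become curve endpoints, and the samples themselves become control points of quadratic Bezier segments. This is a sketch of that general approach, not the shipped code:

```python
# Smooth a "raw" sequence of touch points into a fluent curve: each
# pair of successive midpoints is joined by a quadratic Bezier segment
# whose control point is the raw sample between them.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def quad_bezier(p0, c, p1, t):
    """Evaluate a quadratic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u * u * p0[0] + 2 * u * t * c[0] + t * t * p1[0],
            u * u * p0[1] + 2 * u * t * c[1] + t * t * p1[1])

def smooth_stroke(points, samples_per_segment=8):
    """Turn raw touch samples into a denser, smoothed polyline."""
    if len(points) < 3:
        return list(points)
    out = []
    for i in range(1, len(points) - 1):
        p0 = midpoint(points[i - 1], points[i])
        p1 = midpoint(points[i], points[i + 1])
        for s in range(samples_per_segment):
            out.append(quad_bezier(p0, points[i], p1, s / samples_per_segment))
    out.append(midpoint(points[-2], points[-1]))
    return out

raw = [(0, 0), (10, 0), (10, 10), (20, 10)]
curve = smooth_stroke(raw)
print(len(curve))  # 17: two segments of 8 samples plus the final midpoint
```

Because adjacent segments share a midpoint and its tangent, the resulting curve has no visible corners even when the raw samples zig-zag.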
Year: 2011
Technologies: Java, Android SDK
In 2011 our team was temporarily assigned a high-priority project: building a prototype that could show
whether it was possible to port the GIS rendering API of Luciad's flagship product
(LuciadMap) to mobile devices running Android. Because Android development is also done in Java, it turned
out to be quite easy to get up and running quickly: most of the existing API could be reused, and only a limited
number of rendering classes needed to be rewritten to work with the Android SDK.
In this project I was responsible for the demo application itself, which included building the app, setting up
the client-side code of some custom networking features for the demo, and making sure the
app was very stable (prospective customers would get a chance to play around with it on their personal
devices).
Despite the short time frame of one month, we succeeded in delivering a very stable demo application that
impressed management and customers so much that a new product was launched under the name LuciadMobile. This is one of the projects I am most proud
to have worked on because, despite the lack of time, our team really pulled together to create a fully-featured,
stable demo app on a platform none of us had ever worked with before.
Year: 2009-2012
Technologies: Java, OpenGL, GLSL, OpenCL
My first professional software project: Luciad wanted to create a new
hardware-accelerated version of what was at that time their flagship product: LuciadMap. The main goal was to
leverage the power of the GPU so that the new product would exhibit a vast performance improvement over
LuciadMap's software rendering engine while still supporting the same features. An additional benefit of using
hardware rendering was the opportunity to implement features that had not been feasible up until that point in
time (e.g. due to the algorithm being too demanding for the CPU in combination with real-time rendering).
Screenshot of the Line-Of-Sight (LOS) feature of LuciadLightspeed. Initially implemented as a CPU algorithm,
it took about a minute to compute a detailed LOS radius. By leveraging the parallel computing power of the
GPU (via OpenCL), we were able to make it run in real time (roughly 10 ms to compute the same
LOS radius).
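The core of a line-of-sight test is simple: a target cell is visible from the observer if no terrain sample along the ray between them rises above the sight line. The sketch below shows that per-target test (the terrain and parameters are invented for the example); the GPU version gained its speedup by running this independent per-target work in parallel across OpenCL work-items:

```python
# Naive CPU line-of-sight over a heightmap. Each target is independent
# of the others, which is exactly what makes the algorithm a good fit
# for GPU parallelization. Illustrative sketch only.

def visible(heights, observer, observer_height, target, steps=64):
    """True if `target` (x, y) is visible from `observer` (x, y)."""
    ox, oy = observer
    tx, ty = target
    eye = heights[oy][ox] + observer_height
    target_z = heights[ty][tx]
    for i in range(1, steps):
        t = i / steps
        x = int(round(ox + t * (tx - ox)))
        y = int(round(oy + t * (ty - oy)))
        sight_z = eye + t * (target_z - eye)  # sight-line height at t
        if heights[y][x] > sight_z:
            return False                      # terrain blocks the ray
    return True

# Tiny synthetic terrain: a ridge of height 5 blocks the middle row.
terrain = [[0, 0, 0, 0, 0],
           [0, 0, 5, 0, 0],
           [0, 0, 0, 0, 0]]
print(visible(terrain, (0, 1), 1.0, (4, 1)))  # False: ridge at (2, 1)
print(visible(terrain, (0, 0), 1.0, (4, 0)))  # True: flat row
```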
In the first year of the LuciadLightspeed project, our
team was tasked with building a prototype that could serve as a proof of concept. During this time, I worked on
several parts of the prototype, such as implementing an octree data structure, porting the existing
line-of-sight algorithm to the GPU using OpenCL, and building a demo application from scratch that could be used
to impress customers (and management) with the various improvements that resulted from the prototype phase.
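As an illustration of the octree idea (not Luciad's implementation): each node covers a cubic region of space and splits into eight children once it holds too many points, so spatial queries can discard whole subtrees that miss the query region:

```python
# Minimal point octree sketch: a node covers an axis-aligned cube and
# splits into eight octants once it exceeds its capacity.

class Octree:
    def __init__(self, center, half_size, capacity=4):
        self.center, self.half = center, half_size
        self.capacity = capacity
        self.points = []
        self.children = None  # becomes a list of 8 Octrees on split

    def _octant(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._split()
            return
        self.children[self._octant(p)].insert(p)

    def _split(self):
        cx, cy, cz = self.center
        h = self.half / 2.0
        # Eight children, one per octant, each half the size of the parent.
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)), h, self.capacity)
            for i in range(8)
        ]
        for p in self.points:
            self.children[self._octant(p)].insert(p)
        self.points = []

tree = Octree(center=(0, 0, 0), half_size=100.0)
for p in [(1, 1, 1), (-5, 2, 3), (50, -50, 10), (7, 8, 9), (-1, -1, -1)]:
    tree.insert(p)
print(tree.children is not None)  # True: the root split after 5 points
```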
Screenshot of the projective texturing feature of LuciadLightspeed in the demo application: in
the bottom-left corner a video feed is shown, and in the 3D rendered view a model of a UAV flies over
the terrain while that same video feed is projected onto the terrain's 3D geometry.
Once the prototype was deemed a success, the project moved into the production phase. That is, our focus turned
towards creating an easy-to-use API that closely resembled the existing API of LuciadMap, so that customers could
migrate effortlessly to the new product. My responsibilities in this phase included designing parts of the
API, writing documentation, fixing bugs and further polishing the algorithms I had worked on during prototyping
to make them production-ready.