CeeMat
Dynamic Seeing Tools for Learning, Understanding, and Prototyping Physical Computation
For the final project in Environments Studio I (Fall 2016), we were challenged to improve the student experience of Carnegie Mellon's IDeATe – short for Integrative Design, Arts, and Technology Network – studios located in Hunt Library. Our proposal, CeeMat, improves the experience of learning, working, and prototyping with Physical Computation.



Why did we create CeeMat?
Intro. to Physical Computing – a flagship IDeATe course – regularly experiences a student drop rate of ~40% or more. To address this, we prototyped a system, CeeMat, that introduces three novel features:
Moreover, CeeMat moves us towards a more complete theory of seeing spaces and introduces a novel interpretation of hybrid environments where digital and physical components exist as a single, continuous material.
Coverage
We presented CeeMat during the School of Design's Fall 2016 #CMUDesignWeek at Carnegie Mellon University. The School of Design featured us on their Instagram and IDeATe did so on Twitter.
Process
Visual overview of the process behind researching and prototyping CeeMat.




















Full Process
These are actual process checkpoints completed while working on CeeMat throughout the class. (Click below to open any of the parts in a new tab.)
¬
Environments Studio I: Form & Context
Environments Studio I (51-265); taught by Prof. Peter Scupelli
Environments Design Lab I (51-267); taught by Prof. Austin S. Lee
This project was completed in collaboration with Jessica Nip and Lucas Ochoa for Project A3 in Environments Mini 1 over the duration of ~3 weeks.
December 2016
Column
Human-Interface device for control of Digital & Physical Environments
The Column is a human-interface device for control of digital and physical environments. It can be manipulated by rolling or twisting, and it also allows for the movement of content between different digital surfaces (i.e. moving slides from a monitor to a projector) within the greater context of the physical environment.




Why did we create The Column?
Current human-interface input devices are highly specific and bounded by medium (i.e. a mouse for seated computer use, an on-device dial/knob for fans/blinds/lights, a remote clicker for presenting slides [application-specific standing computer use], un-situated on-wall control of heating/cooling, etc.). As IoT (Internet of Things) devices become more integrated into our lives and the divide between digital and physical environments continues to blur, we believe new forms of control that better allow transitioning between environments and use-cases will be necessary.
The Column is our first stab at a device that sketches potential interactions in two important and evolving spaces: non-specific control of physical and digital environments and movement of content between environments.

Design Process
For our second project, we were tasked with creating prototypes to test our ideas about environments. Specifically, we were asked to design and prototype interactions that would move us towards the studio of the future.



As we continued to prototype, one inspiration was particularly critical: the Sam Flynn vs. Rinzler battle from Tron: Legacy. During the battle, Rinzler is controlled by Clu with the utmost elegance and precision. The human interface Clu uses is simply a pair of orbs, which he swirls around in his hand. This is in contrast, for example, to the body-suit interfaces Jaeger pilots use in Pacific Rim, which are highly specific (i.e. 1:1 body interactions create similar interactions by the robot). We were extremely intrigued by the magic of Clu's interface with Rinzler — the idea that something complex (Rinzler's actions, or even directions for Rinzler to carry out) could be controlled by something as simple as two orbs is incredible. The notion that one should fully embody the desired output in the input (the interface) is tiresome; the magic of Clu's interface is that he only has to embody very little and a great deal of output can be achieved — now that's a truly futuristic interface.





Final Critique
People reacted positively to the presentation of The Column during our final crit. Some even posted Snapchat stories — see video below. One of our professors, Austin Lee, can be seen providing verbal feedback in the background.
Note: the version of The Column presented here is a working concept for a system that is greater than what our working prototype currently enables. This project was completed in collaboration with Lucas Ochoa for Project A2 in E-Mini I, and began in collaboration with Adella Guo in Interactivity & Computation for Creative Practice for Project 09.
December 2016
P:hys:IXEL
volumetric physical pixel display
P:hys:IXEL – derived from Pixel + Physical – is a concept volumetric display made of physical origami pixels.
KAIST Department of Industrial Design
Media Interaction Design F17
Professor Woohun Lee
By Lucas Ochoa, Gautam Bose, Cameron Burgess
P:hys:IXEL is a vignette into a world where the digital environment isn’t stuck behind two-dimensional screens, but instead feels analogue and tangible. A future where dynamic information and interactivity aren’t locked behind shiny backlit RGB screens or fired directly into our retinas (as with VR/AR/MR headsets), but where computational information animates the timeless materials (like paper) that we use every day.
The intention of our project was to explore the notion of a dynamic material that would allow rapid display of three-dimensional media. We ended up creating a physical pixel that can occupy more or less physical space depending on its analog state — from a collapsed cylinder structure to a semi-spherical orb. Moreover, each physical pixel module uses a custom actuation mechanism that enables fast responsiveness with a low actuation-force requirement; this also means minimal stress is placed on the tensile origami structure.





The Story

Idea Conception
We’re constantly interacting with 2D light-emitting information displays — they’re all around us in our built environment as signage, advertisements, and user interfaces. We carry them around in our pockets and use them to connect with one another. These displays afford remarkable flexibility in terms of information-specific interactions. They provide very high resolution, a massive color space, and a responsive canvas for the graphics they display. However, no matter how crisp and vibrant these displays get, they completely lack an aspect of interaction that is fundamental to how we observe our environment: definition in 3D space.
To answer this disparity, we set forth to create a new kind of ‘dynamic material’ that would afford the rapid display of information without the need for emitting light. Our intention was to create a grid of physical pixels with the key characteristics of occupying space in their “on” position and hiding away in their “off” position.
Exploring Space Changing Origami
The physical pixel was inspired by work such as the transfixing actuated-pin display inForm and exciting conceptual frameworks about the future of materials like Hiroshi Ishii’s Radical Atoms. We wanted to create some sort of physical, real-world matter that could be controlled and morphed digitally — in essence, a physical pixel. Ultimately, this idea also became the inspiration for our project’s name: P:hys:IXEL.
The first decision we made when constructing our physical pixel was the modality it would use to indicate the ‘on’ and ‘off’ states and the range in between. There are many modalities to choose from, including size, form, position, color/surface treatment, etc. ‘inForm’ does this by altering the position of each pixel.
Many structures can vary their scale. Consider the umbrella: much of the time it only takes up the space of a thin cylinder. However, it can expand to occupy the space needed to protect someone from rain. This umbrella mechanism served as an initial source of inspiration for the design of each physical pixel.
Full-size umbrellas require moving a runner along much of the pole in order to fully expand — an actuation distance of a few feet. This scale was too large to work with initially, so we decided to prototype with smaller, cheaper drink umbrellas. This mechanism revealed many challenges stemming from the changing amount of force required to move the runner as it approaches the ends of its range.
Throughout our exploration of drink umbrellas, paper was our primary medium. Its simplicity and timeless nature fit our goal of creating a natural-looking project, compared with other, less intuitive materials such as cellophane. Through these explorations in paper, we discovered the craft of origami.
Specifically, we looked at Yuri Shumakov’s magic ball design as a reference when experimenting with many different structures. These structures center around a repeating fold, which slowly wraps the paper into a small cylinder. However, this cylinder can also be expanded by shortening the distance from one end of the cylinder to the other, causing the paper folds to expand outwards, revealing surface area and volume that were hidden.
We enjoyed the motion of this paper structure and felt that it struck the right balance of replicability, pleasing motion, and durability. Though folding these hundreds of creases by hand was arduous, we were able to speed up the process by laser-scoring important folds beforehand. With this structure in mind, we moved on to designing our actuation methodology.
Designing an Actuation Method
Throughout the entire process, we assessed the different design concepts we were exploring with an acrylic test rig. The test rig was designed to hold only one or two units for testing, with servo mounts at the top for exploring automated actuation. This method of prototyping allowed us to test proposed mechanisms to see if they were visually and aesthetically viable for scaling to a grid of 3D physical pixels. By creating this test rig, we effectively built the framework to begin searching for our gold-standard 3D pixel module.
A guided prototyping process using this test rig allowed us to develop what became our final actuation method. Essentially, we created a fixed inner core and a sliding outer core. The outer core was hand-sewn into the origami folds we created, and slits were cut along it so that it could expand and contract with the ball. This outer core was coupled to an SG90 servo via a small pushrod. These small servos were cheap, easily controlled, and provided sufficient torque to move the outer core, thus flexing the ball’s folds outwards.
This mechanism proved to be reliable while providing strong differentiation between the open and closed states. We were able to take this now-actuated origami ball and turn it into something we could replicate to create our display.
Designing a Modular Pixel
Even though we had workable actuation and display through the origami ball, the pixel wasn’t quite ready to become its own workable module. To get there, we used CAD to create a small acrylic holder for the inner core that structurally supported the pixel. This acrylic component also fit into our larger frame that held everything together, and contained mounting points for the SG90 servo. This small module was the ‘gold standard’ pixel we had been working towards, with reliable actuation and the ability to be assembled as part of a larger structure.
System Architecture
After designing a ‘gold standard’ pixel that was easily replicated, our next challenge became assembling the pixels into a prototype of our display. This required that we design control methods, as well as software that would help us turn our physical pixel into a functioning display.
Physically, we produced nine pixel modules and used CAD to design and fabricate a structure to hold them together as a display. The display has two main components: the frame that holds everything together, and the modular pixel units. The modular pixel units are fabricated from laser-cut acrylic, combined with the part laser-cut, part manually folded paper-and-straw mechanism we developed earlier in the project. The base frame is also made of acrylic, with a black felt wrap-around to create a consistent backdrop for the white paper pixels.
On the software side, we tackled many challenges in our quest to efficiently and effectively control these nine pixels. Since each pixel is controlled by a single servo, we chose a single Arduino UNO as our primary control device. However, we wanted to create expressive ‘content’ to demonstrate the potential capabilities of our display, something the limited Arduino programming environment would not easily allow.
Instead, we chose to build a control mechanism in Processing, a creative art/coding toolkit. This allowed us to create expressive animations that were then sent to the Arduino over Serial USB. This decision greatly aided our rapid prototyping process. For example, initial simplistic code that drove the servos directly from one position to another did not produce the natural motion we wanted. Instead, we used sine waves to create smooth transitions from one position to the next, and implementing this in Processing offered significantly more control. This integration of a sine wave into the pixels’ actuation produced a motion reminiscent of a human chest rising and falling with breath.
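As a rough illustration, a minimal Processing sketch in this vein might ease a single servo between its closed and open angles with a sine wave and stream one position byte per frame over serial. (The angles, baud rate, and one-byte protocol here are assumptions for the sketch, not the project’s actual values.)

```processing
import processing.serial.*;

Serial arduino;          // serial link to the Arduino UNO driving the servo
int closedAngle = 30;    // hypothetical 'off' angle for the SG90
int openAngle   = 150;   // hypothetical 'on' angle

void setup() {
  size(200, 200);
  // Assumes the Arduino is the first available serial port at 9600 baud
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // A sine wave (-1..1) mapped onto the servo's range eases the pixel
  // in and out of its open state, like a chest rising and falling
  float phase = sin(TWO_PI * frameCount / 120.0);
  int angle = round(map(phase, -1, 1, closedAngle, openAngle));
  arduino.write(angle);  // one position byte per frame
}
```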
Hypothesizing the Future
Perceiving the digital world today is an exercise in looking at emitted light: screens, blinking LEDs, projectors, or even having lasers fired directly into our eyes (as when wearing a Microsoft HoloLens or Magic Leap One). Our perception of digital experiences is so synonymous with light-emitting information displays that our vocabulary for talking about them is one and the same: ‘screen time’ refers to the time one spends using their mobile computer. The prescriptiveness of this near single-party system is hard to overstate — “emitted light” in all its varieties still looks a certain way, makes us feel a certain way, and has an overall congruent quality that flattens our mental models regarding what computational technology can be, oftentimes reducing our perspective as designers to asking what ‘computational technology using light-emitting displays (normally screens)’ can be.
This is a travesty beyond description, one where light-emitting displays are flattening the texture of how many people experience their (increasingly digital) world. As more tools and practices in life become mediated through computational tools, it’s paramount that our digital interfaces have more ways of speaking to us than through emitted light. P:hys:IXEL is an early sketch of how information might be displayed in a world with more diverse options for computational information display — one where sculptural forms with digital brains can shift, transform, and morph their state to communicate and display the ever more present data in our world.
December 2017
‘Mushy Edges’ Spatial Selection Interface
Environments Data Capture
In the class Sensing Environments (51-377), taught by Mitch Sipus, we became familiar with and identified shortcomings in “Environment Mapping” software — both professional tools like QGIS and Fulcrum and consumer offerings like Google My Maps. Following this research period, we entered a short design sprint where we had to zero in on a particular shortcoming and design a solution.
Table of Contents
Research


Mobile environment data collection application


point-based spatial selection UI shown

Masking and feathering as a strategy for denoting a 'blurry' edge
Context
My Research Question: Night-Time Illumination on Carnegie Mellon’s Campus: What spaces are illuminated? How brightly? Warm or cool light?




Design
Algorithmic Underpinnings
“Metaballs are, in computer graphics, organic-looking n-dimensional objects” — Wikipedia
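To make the idea concrete, here is a minimal sketch (in Processing, with made-up blob positions and radii) of the classic metaball field function. Mapping the summed field to brightness, rather than applying a hard threshold, is what yields the feathered, ‘mushy’ edge between blobs:

```processing
// Hypothetical centers and radii for three selection 'blobs'
float[] cx = {120, 200, 260};
float[] cy = {150, 180, 120};
float[] r  = {40, 55, 35};

void setup() {
  size(400, 300);
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      // Classic metaball field: sum of r_i^2 / d_i^2 over all blobs
      float field = 0;
      for (int i = 0; i < cx.length; i++) {
        float dx = x - cx[i];
        float dy = y - cy[i];
        field += (r[i] * r[i]) / (dx * dx + dy * dy + 1);
      }
      // Brightness instead of a hard threshold = a feathered edge
      pixels[y * width + x] = color(constrain(field * 255, 0, 255));
    }
  }
  updatePixels();
}
```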

October 2018
Interfaces for Computational Paper
Pursuing the Paper of Tomorrow
Table of Contents
Gestural Drawing Prototype
This working prototype explores a gestural radial menu within the context of a drawing application. It was inspired (partially) by the irritation of constantly tool-switching in Paper by 53, and also by some of the reasons further elaborated in my research section.
Menu options in Version 1
- Undo
- Clear All
- Canvas Colour Grey Scale
- Pen Colour Hue
- Pen Colour Saturation
- Pen Colour Brightness
Glider: Tangible Drawing Interface
This project was built for CMU's annual Build18 Hackathon in January 2018.


Software Design & Development
Prior Art
November 2017 — Present
Authoring Environments Collection
Maintained by Cameron Burgess cburgess@cmu.edu
Originally supported by the Dubberly Design Office Original Collection
This database captures hundreds of computer software and hardware interfaces for authoring data and programs from the 1960s through the present day.
Prototype Authoring Environments
Exploring the space of authoring environments by building them from scratch
As a part of my broader research and design interest in authoring environments (programming/design tools, environments for thought, etc.), I’ve begun building prototype authoring environments to more intimately understand the subject matter.
Node-Based Photo Editor
Working prototype of a node-based photo editing application built in Python for Fundamentals of Programming and Computer Science (15-112). Concept and design work completed in Environments Studio IV: Intelligence(s) in Environments (51-360) for the project “Where are the Humans in AI.” This work was presented at Data & Society’s annual Future Perfect conference in New York City, June 7-8, 2018.
GIF Editor from Scratch
Photo editor built in Java for Software Prototyping (ID311) with Prof. Andrea Bianchi at KAIST ID in November 2017.
Computational Plotting
Project 04 from 60-212, Fall 2017







Process
I kicked off my thought process for this project by thinking about this font specimen and how I liked the seemingly arbitrary selection of numbers used. I wanted to create a system that would allow the continual generation of such values and also the spatial placement of such (typographic) numbers.

Technical Approach
From an engineering point of view, I used a dynamic-grid-column drawing system to define the regions of my canvas. Then I used those predefined rectangular shapes to write numbers inside of them using the createFont() method. Importantly, instead of drawing the fonts to the canvas, I drew them to a JAVA2D PGraphics ‘offscreen’ canvas (I remixed some of the code for this from this GitHub project). This means all of the numbers are drawn in my custom font, GT Walsheim, directly onto an image object instead of onto the primary canvas. The reason I do this is to allow for easy distortion and warping of the pixels and elements without having to convert text to outlines and deal with bezier curves.

The next technical question was how to get my design back out of raster format into ‘vector/object’ format, so I could use an exported PDF with the AxiDraw device. I scanned the pixels of the raster with the get() method; from there, I’m able to ‘etch the drawing’ back out of pixels and place objects, which export into the PDF wherever the colour values register in certain ranges.
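A condensed sketch of this pipeline might look like the following. (The font name, darkness threshold, and sampling step are stand-ins, not the project’s actual values.)

```processing
import processing.pdf.*;

PGraphics buffer;  // offscreen raster where the numerals are drawn

void setup() {
  size(600, 600);
  // Hypothetical stand-in font; the original used GT Walsheim
  PFont font = createFont("Georgia", 48);

  // Draw type into an offscreen JAVA2D buffer so its pixels can be
  // warped and sampled without converting the text to outlines
  buffer = createGraphics(width, height, JAVA2D);
  buffer.beginDraw();
  buffer.background(255);
  buffer.textFont(font);
  buffer.fill(0);
  buffer.text("1957", 120, 300);
  buffer.endDraw();

  // 'Etch' the drawing back out of the raster: scan the buffer with
  // get() and place plotter-friendly marks wherever pixels are dark
  beginRecord(PDF, "plot.pdf");
  noStroke();
  fill(0);
  int step = 4;  // sampling resolution; smaller = denser plot
  for (int y = 0; y < buffer.height; y += step) {
    for (int x = 0; x < buffer.width; x += step) {
      if (brightness(buffer.get(x, y)) < 128) {
        ellipse(x, y, step * 0.8, step * 0.8);
      }
    }
  }
  endRecord();  // the resulting PDF can then be fed to the AxiDraw
  noLoop();
}
```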
Code & Technologies
– Evil Mad Scientist AxiDraw
– Processing

Note: This project, completed for 60-212: Interactivity & Computation for Creative Practice with Prof. Golan Levin, is primarily an arts-engineering exercise, not a design project. See the original course-blog documentation for this project here.
September 2016
Slo-Motion Microscope Videography
Experimental Capture, Spring 2017
Process
In Environments Studio II (Spring 2017), a great deal of focus was put on the study of scale and space. I decided to continue those preliminary studies in the art studio I was taking that semester (Experimental Capture) by expanding the area of research to include time/motion, in addition to scale.
All studies were completed using an Edgertronic slow-motion camera and a Bausch & Lomb StereoZoom microscope.



GIFs




Feedback from Class Critique
Slow motion microscope images – film: Zea (Canadian film board). Abstract porn. Showing something we know in a way we have never known it. Time – Muybridge. What is the relationship between sound and experience? Think about the audio track.
I think it would be very helpful to have captions on the videos that explain what we’re seeing: glitter dust, Febreze bottle squirting, penny being frosted
Love the work! It was mesmerizing… Sorry I laughed–it was Charlie’s comment that made me laugh. The work itself is fascinating. It is very artistic. The everyday objects transformed into something very magical in your piece. It was awesome. Great job~ – A
Strong explanation and good background.
I’m unsure what you mean by “artistic approach”
I can’t believe how cohesive it felt. All of the experimentations were different objects but the video looked whole. It really has a specific feeling to it, very visceral. So interesting.
I love how the lens looks a bit dirty because the footage has a really vintage feeling.
^True, but clean that sensor lol.^^^ +
Really cool! I think you might want to take a stab at developing a microfocusing system. I can tell that the super shallow depth of field is hurting you
What would help would be to label what we’re looking at in the video, maybe a subtitle or something…?
I enjoy the freezing penny most…. Interesting to see that tiny frosting, the transition between states of matter, crazy surfaces
EDM visuals
Music video
Abstract porn
COLORS OMG ++
What unifies the subject matter beyond looking cool?++!
Where are the crickets <<< !!!!!!! ++++ +++++++++
Very beautiful in terms of pure visual exploration : O
I have no idea what I’m looking at and I fucking love it
You are incapable of telling what you are looking at, which adds to the piece tremendously
Music could be better+++ or no music at all
Really small things at a high speed. Nice approach!
Sound design idea- contact mic your workspace, the camera shutter etc- capture high bit rate audio to go alongside your captures +
Charlie White is wrong, electronic music is feelings.+
Electronica is the default content-free capitalism sound-track
It depends but I feel that is way more true of guitar-music
Zea documentary??
“Curiosity, repulsion and arousal” - Charlie 2017
Source
Links
Twitter Moment
Course Blog Post
¬ Special Thanks to Steve Stadelmeier for lending me the B&L Microscope and advising me on its usage and to Golan Levin for his support in making this project possible at the STUDIO.
April 2017
Asana
Product Design Internship

Team Pages for Android
My primary project while at Asana was designing new team management and navigation functionality into Asana's Android application. This involved working with the interaction flows and related visual & pattern design decisions that make up the Asana mobile experience. But first, what are teams in Asana, and how did this functionality already work on desktop? Watch the video below from the Asana Guide on Team Basics for a basic understanding.
Design Territory Map
After understanding how Teams existed in the web version of Asana, I drew a map of the territory within the Asana system on Android where the team-based functionality would be introduced — this helped me grok the complexity and interrelated aspects of the Asana experience on Android.



















While designing team pages, I consulted our mobile user research and found, in summary, that users were sometimes confused about 'where in the application' they were. To address this, I prototyped multiple directions for how we could denote greater hierarchy and understanding around team pages, project pages, and the navigation between their task lists and conversations. Changes to the interaction design of team pages were ultimately scoped out of the project.






Conclusion & Reflection
Reflecting on my experience designing team pages, it was an incredibly difficult and messy design assignment to receive as an intern. To address the design problem, I had to work across and introduce entry points throughout a large swath of Asana's mobile experience, which required knowledge of a vast amount of the system — just acquiring the necessary knowledge to properly design, and then advocate for, my design took the majority of my internship.
Moreover, gaining the respect of my peers and learning how to adequately make arguments for changes that touched multiple parts of the Asana experience was harder still. Suffice it to say, this wasn't a sandbox project with a clean slate, and explaining what I did while at Asana isn't the cleanest "A to B" story. However, learning how to get up to speed on a complex product with lots of stakeholders and decision history was an experience I wouldn't trade for anything.
My two biggest takeaways from working at Asana:
(1) "Ship Cupcakes," which derives from a conversation I had with my PM on how to get design out the door and into the product; the alternative to cupcakes being 'wedding cakes,' which are large and perfect, but never get built.(2) Large systems take time to learn and it takes more time to to learn how to design for them. It's something I'm better at now than ever before, but also something that I realize makes me an abnormal observer and user of a product — trying to put yourself back in shoes of a first time user is something I always try to do.

My work on team pages shipped in the middle of December 2016; you can read about it on the Asana blog.
Researching & Prototyping Future Features
Throughout my internship, I was occasionally involved with projects that fell outside my main work on Team Pages for Android. I worked on prototyping, design research, and exploration for Boards in Asana, Mobile Quick Add, and Calendar for Mobile.
Boards
Boards in Asana is a new feature that makes Asana more visual and usable for multi-stage work. Boards launched in November 2016, but I had the opportunity to co-design on the project with Paul Velleux during my time on the mobile team. One of the issues Boards faces on mobile is the aspect ratio of the smartphone display: portrait. This doesn't map well from desktop, where the standard landscape orientation makes seeing multiple columns at once easy. Because of this, many issues arise, including navigation between boards and movement of cards across boards.




Mobile Quick Add
Near the end of my internship, I had a brief moment to investigate the Quick Add button in mobile Asana. It's the entry point for a lot of primary functionality (i.e. new tasks) and was going to be overhauled in the near future. I didn't get to finish this project, but I do have some niblets of early research and design.


Mobile Calendar
Before starting my primary work on Team Pages, I briefly got up to speed by researching the possibility of bringing desktop Asana's calendar view to mobile. One interesting quirk of this project was modeling a certain aspect of the desktop's design while ideating on the mobile version — that aspect was a gradual colour change in grey values used across the n days of any given month. To save time clicking in Sketch, I whipped up a quick DrawBot application that would spit out all of the steps between the two greys for any number of slices.
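The underlying interpolation is simple. The original tool was written in DrawBot, but the same logic, sketched here in Processing with made-up endpoint greys, is just a linear blend across n slices:

```processing
// The two endpoint greys (hypothetical values) and the number of
// day-slices to interpolate between them
color lightGrey = color(235);
color darkGrey  = color(180);
int slices = 31;

void setup() {
  size(620, 100);
  noStroke();
  float w = width / float(slices);
  for (int i = 0; i < slices; i++) {
    // lerpColor() yields each intermediate grey step
    fill(lerpColor(lightGrey, darkGrey, i / float(slices - 1)));
    rect(i * w, 0, w, height);
  }
}
```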



Quick Actions for iOS
During the last week of my internship, I had the unique opportunity to write the spec for my own feature and work with Asana's engineers to ship Quick Actions for 3D Touch. The final spec included Quick Actions for New Task, New Task with Photo, Inbox, and Search.
New Task with Photo, in particular, was a feature I advocated hard for, using information from our mobile user research to reinforce the idea that Asana was missing opportunities to capture 'the seeds of work' on mobile, and precedent from other applications, like Google Keep, to make the argument for its inclusion.
¬ not all features here shipped exactly as represented.
June 2016
Designing Wick Editor
A Timeline-based Media Creation Toolkit
Wick Editor is a scriptable, timeline-based multimedia creation toolkit for making interactive animations, games, and other ideas. The project is headed by Luca Damasco and Zach Rispoli, and is funded in part by The STUDIO for Creative Inquiry. In January 2017, I was hired to assist in a complete redesign of the project – in-progress work can be seen below.
Wick Editor is live at wickeditor.com. It's also an open-source project on GitHub.
Before

After
## work-in-progress ##
Some of our early design process is available on Dropbox Paper. Please forgive the lack of organization and description — this gallery is a mix of early concept maps, sketches, visual design, and prototypes. This work was done in conjunction with Gautam Bose.





Chromebook asset on project tile from Greg Warner
Think Tank Team, Samsung Research America
Design and Prototyping Internship

During the summer of 2017, I interned at Samsung Research America on the Think Tank Team in Mountain View, California. The Think Tank Team (TTT for short) is an R&D lab composed of engineers, scientists, and designers developing future products, services, and experiences across all of Samsung's consumer electronics product categories.
During my time at TTT, I worked primarily in the interaction design, concept development, and 3D design spaces on candidate designs and concepts for future flagship Samsung smartphones.
Get in touch for more details.
I wrote about my summer at Samsung’s Think Tank Team on the CMU Design@Work blog here.
May 2017
Systematic Design Methods
The linked website below is a framework for understanding the similarities and differences between the methods looked at in Systems (51-172). The class was taught by Cameron Tonkinwise and Kakee Scott at the CMU School of Design in Spring 2016.
Graphic Design
Studies in Typography & Form
An understanding of graphic design is a through line in my design practice; below are selected studies in scale, type, grids, and hierarchy.
Typeface Study: Taz
In Communications Studio I, taught by Prof. Dan Boyarski, every student was assigned a type family for analysis – I worked with Taz, creating static and dynamic artifacts.





Grids Analysis: The New York Times Magazine
The first project in Communications Studio I, taught by Prof. Dan Boyarski, was the analysis of grids within a web or print publication. I worked in collaboration with Heidi Chung to analyze The New York Times Magazine. We created a Keynote presentation, which was shown in conjunction with a verbal presentation – it is presented here as a video, with select slides available below for more careful review.






Hierarchy Study: Poster Series
In Communications Studio I, taught by Prof. Dan Boyarski, we explored the notion of hierarchy in design by creating a poster for a hypothetical lecture series.





Digital Microscopy: Final Cut Pro X
In Environments Studio II, the study of scale in the physical world was emphasized. As an extension of this study, we explored the meaning of scale in the digital environment by scaling a screenshot of Final Cut Pro X from its original size (4" diagonal) to 8' 7" diagonal – a scaling factor of 25X.


This study challenges the traditional notion that the digital environment is assembled from pixels by proposing that the digital environment exists purely and infinitely in the human mind, and that its 'pixelwise representation' is simply a mechanism of mechanical display. This provocation also points towards potential futures where interactive and dynamic content will manifest itself in manifold materials beyond traditional (RGB-XY) displays.
This project was completed in collaboration with Gautam Bose. Photo documentation by Soonho Kwon. Click to download the sketch file below.

Ongoing
Carnegie Mellon Hyperloop
From September 2015 to February 2016, I served as the Lead Designer and Design Team Lead for Carnegie Mellon Hyperloop, the SpaceX Hyperloop Pod Competition team from CMU.
One of my primary projects was architecting the design and communication frameworks for our Competition Weekend engineering presentation at Texas A&M University.
Interdisciplinary Sprints
I began by spending long stretches of time talking with various engineers on the team, working to understand the functions of the pod and how I could communicate them to various audiences.


Communication Frameworks
With a complicated system such as the Hyperloop, establishing communication frameworks for consistently explaining common events and systems was essential. I collaborated with Ruolan Xia, the principal graphic designer on CMH Design, to create visual tiles for our communication frameworks.

Animation as Communication
As the pod moves through the tube, the currently operating sub-systems change. To show this visually, I animated the Systems and Events frameworks in unison.
Other Selected Slides
More slides from the presentation (not ordered) are below; for a full copy, contact me.




3D Renders by Gautam Bose
January 2016
Interfaces for Computational Paper
Pursuing the Paper of Tomorrow
This page lists my ongoing research/investigations into interfaces for computational paper.
Research
Prototypes
Early prototypes of a paper notion of computing extend ideas of a hybrid environment explored in CeeMat: Dynamic Seeing Tools for Learning, Understanding, and Prototyping Physical Computation.
¬
Studies in Interactive|Dynamic Materials
Independent Study, Fall 2017
Advisor: Prof. Andrea Bianchi
KAIST Dept. of Industrial Design
Some explorations completed in collaboration with Lucas Ochoa.
November 2017
PRIOR ART
Recent History in Consumer Tablet Computing
part of Pursuing the Paper of Tomorrow
Al Gore’s Our Choice Interactive Book by Push Pop Press
When the iPad was introduced, it was done so under the premise of being ‘in the middle’ between the smartphone and the computer, justified in its existence only by being better than both devices at a key set of things:
Photos & Video
Games
Books
Notably absent from the list are any categories of productive or creative work. This didn’t stop early app developers from trying to reorient the iPad for creative and productive work; the first major attempt at this vision was Paper by FiftyThree in 2012. And the trend has not abated as time has gone on — in fact, it’s only accelerated as hardware makers and app developers look for ways to increase the value proposition of non-PC devices.
FiftyThree was famously founded by the team that had previously worked on the now infamous Courier tablet concept at Microsoft, leaked by Gizmodo in 2009.
By 2015, the idea of a more professional and creative tablet was validated by Apple’s introduction of the iPad Pro and the Apple Pencil. Earlier that year, FiftyThree had updated Paper with new features that made it much easier to diagram and ‘think’ with the application, in addition to its traditional roots in drawing and expression.
Diagram
Select
Fill
Since the introduction of the original Surface Pro in 2013, Microsoft has also been pursuing pen-based input for its tablets, and it has only become more serious in the endeavour over the last few years.
Recently, a startup named reMarkable introduced a new product centered around the idea that paper-like interaction is an inherently calmer and more productive medium for note-taking and drawing.
Where are we now?
Despite all of the activity in the space and the growing penetration of tablet devices, I believe we’re in a bizarrely odd state of affairs with regard to tablet computing’s approach to pen-based input, and its interaction design metaphors more generally.
PRIOR ART
Review of Academic Research in Tablet Computing
part of Pursuing the Paper of Tomorrow
Touchtools, Chris Harrison
http://www.chrisharrison.net/index.php/Research/Touchtools
Kun-Pyo
Dpi.kaist
2010… how users manipulate deformable, disappearing input devices
https://dl.acm.org/citation.cfm?id=1753572
Ken Hinckley
Microsoft
https://www.youtube.com/watch?v=9sTgLYH8qWs
Pen writes touch manipulates
Dan Vogel
Conté: Multimodal Input Inspired by an Artist's Crayon
http://cognitivemedium.com/magic_paper/index.html
http://worrydream.com/MediaForThinkingTheUnthinkable/
RESEARCH
Contrasting tablet computing with its physical counterpart
part of Pursuing the Paper of Tomorrow
Contemporary Tablet Computing implementations
[video coming]
Keynote (pending)
[Keynote: pen and finger do the same thing (why?)][shot of desk with paper and pencil doing different things]
[video coming]
iBooks (pending)
[show the fumbling between turning off pencil mode, trying to swipe between pages and covering the page in ink][show how the highlighter covers up the ink]
[show how, with real paper, the highlighter doesn’t cover words]
more case studies (and documentation) coming..
Desktop materials, hands, and modes of manipulation
more coming...
Explorations in Interactivity & Computational Methods
60-212, Fall 2016 | Prof. Golan Levin
Below are various projects completed for 60-212: Interactivity & Computation for Creative Practice, taught by Prof. Golan Levin in the Fall semester of 2016. These examples are presented here not to display any particular design process or acumen, but to show a range of computational abilities I can employ when appropriate.
Drawing Generative Plots with Processing




Additional Process Work | Code on GitHub
"Nice exploration of type as both glyphs and raster, exploiting the visual qualities of glitch and overprint to good effect." – Marius Watz
"Technically driven concept. Good documentation of process. Interesting composition. Two colors adds visual complexity." – Nick Hardeman
Project Feedback from Course Blog
Building Pinch-to-Zoom with Real-Time Motion Capture Data from Kinect V2
Additional Process Work | Code on GitHub
Software Pipeline for applying computational transformations to a live video feed

Additional Process Work | Code on GitHub
"A very solid technical and conceptual investigation. It reminds me of finger smudges left on phone screens." – Caroline Record
"Good concept, good use of chaining software." – Kyle McDonald
Project Feedback from Course Blog
Selecting and Transforming large data-sets for the development of interactive web applications






Additional Process Work | Code on GitHub
Cleaning & Preparing data for use in D3 blocks

Additional Process Work | Code on GitHub
October 2016
Building Pinch-to-Zoom with Motion Capture Data
Project 08 from 60-212, Fall 2017


The Backstory
When I was 12 years old, I visited the North American Veterinary Conference (NAVC) with my mother in Orlando, Florida. I was walking around the show floor with my mom when we decided to stop at the Bayer booth. In the middle of the booth was an original Microsoft Surface table — many people were congregating around it to see what it was all about. My mom and I played with it for a while and then she left to enjoy the rest of the conference, but I stayed in the Bayer booth for easily 3 or 4 more hours, becoming good friends with the booth attendants. I think it was the first highly responsive touch interface I’d ever used, and it played on in my dreams for weeks. When I returned home, I tried to get my dad to buy one for our house, but at the time it was ~10-15K to install and you had to be a commercial partner…
Reflection
Direct manipulation, pinch-to-zoom, two-finger rotation, and other related interactions are fundamental principles of modern touch and gesture-based computing. Yet understanding all of the underlying intricacies of these human interfaces is difficult, even when directly analyzing them. Now, having implemented (parts of) them myself, I've gained a totally new understanding of the nuance of design needed to make these interactions feel as magical as they do. For example, seeing how multiple anchor points (fingers/hands) affect scaling, movement, and selection is something I'd never considered so consciously.
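For a flavor of the underlying math, here is a minimal sketch (in Processing, with made-up anchor positions) of how two anchor points determine the scale, rotation, and pan of a pinch gesture:

```processing
// Two anchor points (e.g. fingertips or tracked hands), with their
// positions when the gesture began and where they are now
PVector a0 = new PVector(100, 200), b0 = new PVector(300, 200);  // at gesture start
PVector a1 = new PVector(60, 180),  b1 = new PVector(360, 240);  // current frame

void setup() {
  // Scale is the ratio of the current anchor distance to the starting distance
  float scaleFactor = PVector.dist(a1, b1) / PVector.dist(a0, b0);

  // Rotation is the change in angle of the line between the anchors
  float rotation = atan2(b1.y - a1.y, b1.x - a1.x)
                 - atan2(b0.y - a0.y, b0.x - a0.x);

  // Translation follows the midpoint between the anchors
  PVector mid0 = PVector.lerp(a0, b0, 0.5);
  PVector mid1 = PVector.lerp(a1, b1, 0.5);
  PVector pan  = PVector.sub(mid1, mid0);

  println("scale: " + scaleFactor + ", rotate: " + rotation + ", pan: " + pan);
}
```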
Code & Technologies
– Kinect SDK 2.0
– KinectV2-OSC
– p5osc
– Processing

—
Note: This project, completed for 60-212: Interactivity & Computation for Creative Practice with Prof. Golan Levin, is primarily an arts-engineering exercise, not a design project. See the original documentation for project eight here: cambu-mocap
November 2016
——
Low Fidelity Prototype
To start, I quickly sketched out ideas on tabloid paper with pens and markers. I then scanned the documents and used Keynote build animations to bring my ideas to life with minimal turnaround time.
In-Progress Slide Deck







Medium Fidelity Prototype
For the medium fidelity prototype, I continued to use Keynote for animation but used Sketch App to draw up assets instead of relying on paper sketching. I also added sound effects to help communicate how usage would proceed.
Final Resolution
For the final stage of our project, we scaled back the overall scope to only prompting for citations, instead of prompting and enabling quick citing. We built the product into a fully functioning Chrome Extension — it's available on the Chrome Web Store.
select slides from final presentation below













Test out Fastback
- Install Fastback from the Chrome Web Store
- Go to a pull request on GitHub.com (example for testing)
- Leave some line comments, see how Fastback changes your citation behavior.

Short Paper
We wrote up the details of our extension design, research design, research findings, and future directions in a short academic-style conclusion paper.

Our investigation focused on the public working spaces within the Cohon University Center at Carnegie Mellon University. My group partners were Faith Kim, Marisa Lu, and Kyle Lee.





















—
Paper #18
Tutorial Dialogue as Adaptive Collaborative Learning Support
Paper #15
Mudslide: A Spatially Anchored Census of Student Confusion for Online Lecture Videos
Photo Design I & II
In the spring semester of freshman year at Carnegie Mellon, students within the School of Design take two photography classes: Intro to Photo Design (51-132; Prof. Dylan Vitone) and Photo Design II (51-134; Prof. Charlee Brodsky). Select photos from the courses are below.
Humans of Design










Humans of CES 2016








Door Signs of Carnegie Mellon



















































Mellon Green










Textures of Reese





——
Metaphor-Based Communication
Presentation Slide Deck










