P:hys:IXEL
volumetric physical pixel display
P:hys:IXEL – derived from Pixel + Physical – is a concept for a volumetric display made of physical origami pixels.
KAIST Department of Industrial Design
Media Interaction Design F17
Professor Woohun Lee
By Lucas Ochoa, Gautam Bose, Cameron Burgess
P:hys:IXEL is a vignette into a world where the digital environment isn’t stuck behind two-dimensional screens, but one where it feels analogue and tangible. A future where dynamic information and interactivity aren’t locked behind shiny backlit RGB screens or fired directly into our retinas (as with VR/AR/MR headsets), but one where computational information animates the timeless materials (like paper) that we use every day.
The intention of our project was to explore the notion of a dynamic material that would allow rapid display of three-dimensional media. We ended up creating a physical pixel that can occupy more or less physical space depending on its analog state — from a collapsed cylinder to a semi-spherical orb. Moreover, each physical pixel module uses a custom actuation mechanism that enables fast responsiveness with a low actuation-force requirement; this also means minimal stress is placed on the tensile origami structure.





The Story

Idea Conception
We’re constantly interacting with 2D light-emitting information displays — they’re all around us in our built environment as signage, advertisements, and user interfaces. We carry them around in our pockets and use them to connect with one another. These displays afford remarkable flexibility in information-specific interactions: very high resolution, a massive color space, and a responsive canvas for the graphics they display. However, no matter how crisp and vibrant these displays get, they completely lack an aspect of interaction that is fundamental to how we observe our environment: definition in 3D space.
To answer this disparity, we set forth to create a new kind of ‘dynamic material’ that would afford the rapid display of information without the need to emit light. Our intention was to create a grid of physical pixels whose key characteristics were occupying space in their “on” position and hiding away in their “off” position.
Exploring Space Changing Origami
The physical pixel was inspired by work such as the transfixing actuated-pin display inForm and exciting conceptual frameworks about the future of materials, like Hiroshi Ishii’s Radical Atoms. We wanted to create some sort of physical, real-world matter that could be controlled and morphed digitally — in essence, a physical pixel. Ultimately, this idea also became the inspiration for our project’s name: P:hys:IXEL.
The first decision we made when constructing our physical pixel was the modality it would use to indicate the ‘on’ and ‘off’ states and the range in between. There are many modalities to choose from, including size, form, position, and color/surface treatment. inForm does this by altering the position of each pixel.
Many structures can vary their scale. Consider the umbrella: much of the time it only takes up the space of a thin cylinder. However, it can expand to occupy the space needed to protect someone from rain. This umbrella mechanism served as an initial source of inspiration for the design of each physical pixel.
Expanding an umbrella fully requires sliding the runner along the pole, which for a full-size umbrella is an actuation distance of a few feet. This scale was too large to work with initially, so we decided to prototype with smaller, cheaper drink umbrellas. This mechanism revealed many challenges stemming from the changing amount of force required to move the runner as it approaches the ends of its range.
Throughout our exploration of drink umbrellas, paper was our primary medium. Its simplicity and timeless nature suited our goal of creating a natural-looking project, compared with other, less intuitive materials such as cellophane. Through these explorations in paper, we discovered the craft of origami.
Specifically, we looked at Yuri Shumakov’s magic ball design as a reference when experimenting with many different structures. These structures center around a repeating fold, which slowly wraps the paper into a small cylinder. However, this cylinder can also be expanded by shortening the distance from one end of the cylinder to the other, causing the paper folds to expand outward and revealing surface area and volume that were previously hidden.
We enjoyed the motion of this paper structure, and felt that it struck the right balance of replicability, pleasing motion, and durability. Though it was arduous to make these hundreds of folds by hand, we were able to speed up the process by laser-scoring the important folds beforehand. With this structure in mind, we moved on to designing our actuation methodology.
Designing an Actuation Method
Throughout the entire process we assessed the different design concepts we were exploring with an acrylic test rig. The test rig was designed to hold only one or two units for testing, with servo mounts at the top for exploring automated actuation. This method of prototyping allowed us to test proposed mechanisms to see if they were visually and aesthetically viable for scaling to a grid of 3D physical pixels. By creating this test rig we effectively built the framework to begin searching for our gold-standard 3D pixel module.
A guided prototyping process using this test rig allowed us to develop what became our final actuation method. Essentially, we would create a fixed inner core and a sliding outer core. The outer core was hand sewn into the origami folds we created, and slits were cut along it so that it could expand and contract with the ball. This outer core was coupled to an SG90 servo via a small pushrod. These small servos were cheap, easily controlled, and provided sufficient torque to move the outer core, flexing the ball’s folds outwards.
This mechanism proved to be reliable while providing strong differentiation between the open and closed states. We were able to take this now-actuated origami ball and turn it into something we could replicate to create our display.
Designing a Modular Pixel
Even though we had workable actuation and display through the origami ball, the pixel wasn’t quite ready to become its own workable module. To do so, we used CAD to create a small acrylic holder for the inner core that structurally supported the pixel. This acrylic component also fit into our larger frame that held everything together, and contained mounting points for the SG90 servo. This small module was a kind of ‘gold standard’ pixel that we were working towards, having reliable actuation and the ability to be assembled as part of a larger structure.
System Architecture
After designing a kind of ‘gold standard’ pixel that was easily replicated, our next challenge became how we would be able to assemble them into a prototype of our display. This required that we design control methods as well as software that would help us turn our physical pixel into a functioning display.
Physically, we produced nine pixel modules and used CAD to design and fabricate a structure to hold them together as a display. The display has two main components: the frame that holds everything together, and the modular pixel units. The modular pixel units are fabricated from laser-cut acrylic together with the part-laser-cut, part-hand-folded paper-and-straw mechanism we developed earlier in the project. The base frame is also made of acrylic, with a black felt wrap-around to create a consistent backdrop for the white paper pixels.
On the software side, we tackled many challenges in our quest to efficiently and effectively control these nine pixels. Since each pixel is controlled by a single servo, we chose a single Arduino UNO as our primary control device. However, we wanted to create expressive ‘content’ to demonstrate the potential capabilities of our display, something the limited Arduino programming environment would not easily allow.
Instead, we chose to build the control mechanism in Processing, a creative art/coding toolkit. This allowed us to create expressive animations that were then sent to the Arduino over serial USB. This decision greatly aided our rapid prototyping process. For example, an initial, simplistic program that moved the servos directly from one position to another did not produce the natural motion we wanted. Instead, we used sine waves to create smooth transitions from one position to the next, and implementing this in Processing offered significantly more control. This sine-wave easing gave the pixels a motion reminiscent of a human chest rising and falling with breath.
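The easing idea can be sketched in a few lines. The following is a minimal illustration in Python (the original host program was written in Processing); the frame format, sync byte, constants, and function names here are assumptions for illustration, not the project’s actual code.

```python
import math

NUM_PIXELS = 9                  # 3x3 grid of origami pixels
SERVO_MIN, SERVO_MAX = 0, 180   # assumed SG90 angle range, in degrees

def ease(start, end, t):
    """Sine-eased interpolation between two servo angles.

    t runs from 0.0 to 1.0; the cosine term gives the slow-in/slow-out
    'breathing' motion described above instead of a linear jump.
    """
    s = (1 - math.cos(math.pi * t)) / 2  # maps 0..1 to 0..1, eased at both ends
    return start + (end - start) * s

def frame_bytes(angles):
    """Pack one animation frame for the serial link to the Arduino.

    The framing (a 0xFF sync byte followed by nine angle bytes) is a
    hypothetical protocol, shown only to illustrate the host-side role.
    """
    clamped = [max(SERVO_MIN, min(SERVO_MAX, int(a))) for a in angles]
    return bytes([0xFF] + clamped)

# Example: ease all nine pixels from closed (0) to open (180) over 30 steps.
for step in range(31):
    t = step / 30
    angles = [ease(0, 180, t)] * NUM_PIXELS
    payload = frame_bytes(angles)
    # In the real setup this payload would be written to the Arduino's
    # serial port each frame, e.g. serial_port.write(payload).
```

On the Arduino side, each received angle byte would simply be forwarded to the corresponding servo, keeping all animation logic on the host.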
Hypothesizing the Future
Perceiving the digital world today is an exercise in looking at emitted light: screens, blinking LEDs, projectors, or even lasers fired directly into our eyes (as when wearing a Microsoft HoloLens or Magic Leap One). Our perception of digital experiences is so synonymous with light-emitting information displays that our vocabulary for talking about them is one and the same: ‘screen time’ refers to the time one spends using a mobile computer. The prescriptiveness of this near single-party system is hard to overstate — “emitted light” in all its varieties still looks a certain way, makes us feel a certain way, and has an overall congruent quality that flattens our mental models of what computational technology can be, often reducing our perspective as designers to asking what ‘computational technology using light-emitting displays (normally screens)’ can be.
This is a travesty beyond description: light-emitting displays are flattening the texture of how many people experience their (increasingly digital) world. As more of life’s tools and practices become mediated through computation, it’s paramount that our digital interfaces have more ways of speaking to us than emitted light. P:hys:IXEL is an early sketch of how information might be displayed in a world with more diverse options for computational information display — one where sculptural forms with digital brains can shift, transform, and morph their state to communicate and display the ever more present data in our world.
December 2017