Emerging Technologies Production/Portfolio

Emerging Technologies – Research Proposal

Research Overview

For this project, I plan to create a linear VR art gallery experience using a mixture of Maya and its MASH networks as well as the OpenBrush software, creating a mixed 2D and 3D environment for players to explore.

In terms of genre, this project will be a mix of action, horror and dark fantasy, inspired by the shonen manga Chainsaw Man. My main objective with this project is to design and develop a short VR experience that both creates tension in the open environment it’s set in and excites the user with suspenseful action sequences, which will also serve as the main guide through each section of the narrative.

Before I started the project planning, I needed to research VR horror as a genre and how existing titles handle different scripted sequences. This was mainly so I could apply ethical considerations within the UX design.

Many VR horror experiences use jumpscares to get a short-term thrill out of the player. Whilst jumpscares are impactful in other mediums such as traditional video games, manga and film, they’re especially effective in VR, where the threat is directly in front of the player’s perspective.

Reading an interview with Cory Davis, the creative director of Here They Lie, he states: “I do believe that it’s good to have a reminder that this is a very extreme experience. [We’re] still in the infancy of what we’re going to learn in terms of what these experiences can do.”, arguing that marketing the game’s experience honestly is the most ethical way to set up the player’s expectations, especially for an experience that plays on the user’s emotive ‘fight or flight’ responses. This prevents deception and the exploitation of players’ vulnerable senses.

Hayden, S. (2018). PSVR Horror Adventure ‘Here They Lie’ Coming to PS Plus Members for Free This Month. [Image] Road to VR. Available at: [Accessed 11/11/2023].

I want to apply this advice to the visuals as well – therefore I’ll refrain from adding interactable weapons or excessive gore and mutilation. Even though my project is not entirely focused on its horror elements, shock value is seen as a cheap way to create suspense from a narrative standpoint, and disgust is the last emotive response a user should experience. These elements could also exploit players’ vulnerable senses, which goes against the ethical considerations guiding the design.

In the interview, he also mentions removing player agency within the experience. Not only does this amplify the effectiveness of the horror mechanics but, from an ethical perspective, it prevents users from enacting violence in a setting where that was never the game’s intention. Of course, allowing the player to move at their own pace remains fundamental to VR UX design.

I’ve also explored different UX principles within VR design and how to apply them to a horror game. For each of these main principles, I’ll discuss how I’ll apply it within the broader scale of the project:

Ergonomics – This mainly covers physical, visual and auditory accessibility. As mentioned later on, visual and auditory cues will be applied not only to guide players to important areas but also to improve accessibility for those with different impairments, especially in a VR space.

The VR experience will also be linear, allowing players to move between locations at their own pace. From user feedback, I’ve found that VR experiences that move players automatically can cause sensory overload or motion sickness, as the senses struggle to reconcile the virtual reality with the physical one; automatic movement also restricts player agency and can leave players feeling claustrophobic within their own space.

Considering interactivity, I plan to have the player traverse by moving between key areas. This also ensures players’ safety, especially in smaller play spaces, and means that players with physical impairments can comfortably explore the game without the need for excessive movement.

Range of User’s View – Because I plan for the project to be a linear experience, the user’s viewing range will be 200 degrees, allowing players to observe their surroundings fully whilst preventing them from getting lost or disorientated when searching for scripted events or exploring the environment.

Avoiding sudden elements and motion sickness – I’ve touched on these elements when discussing the ergonomic considerations, but I want to keep the visual style consistent throughout the experience, only adding different imagery in the background. This helps prevent discomfort as players explore the environment around them. I’ll also keep the colour scheme simple, going for a more monochromatic palette to reflect the manga art style whilst improving accessibility for those with colour blindness.

As for sudden elements, since this project is centred less on horror and more on action-based sequences, I’ll again allow player-driven movement from one scene to another, making sure players have enough breathing space between each sequence and can move through the experience at their own pace. In addition, there’ll be a build-up in the audio before sound effects play, giving the player a chance to anticipate the sequence without causing emotional distress.

User Interactivity – Another potential design flaw within VR development is a player missing a significant event or cutscene – therefore I’ll use subtle visual guides such as lighting or colour saturation, or even audio stingers, to help guide the player to the main objective. Other visual guides could include arrows pointing the player to the primary focus; however, I find this method immersion-breaking, and for accessibility it’s best to combine visual and audio cues, especially for those with audio or visual impairments.

From researching Here They Lie, ethical UX design considerations and the overall visual style, I finally set out an outline for how I wanted to approach the project in a technical sense. With this research, I also created set guidelines for myself to follow throughout the project’s development, which will help create a visually appealing action horror experience with user experience and immersion coming first in the design pipeline.

Project Plan

When I started my project planning, I created a mood board to help establish which visual styles I wanted to explore within the experience. My main inspiration, as mentioned before, came from Chainsaw Man – both the manga and the anime. This is due to their different approaches to handling the dark fantasy nature of their respective mediums, as well as their action scenes, which create dynamic visuals within each shot.

In this padlet, I researched Chainsaw Man’s manga style and analysed the use of contrast in action scenes within the story’s environments. For example, in the manga, linear line work is used for structures around the city, making the buildings appear lifeless, perfect and daunting. This helps the structures tower over the main character, making him appear smaller in the grand scheme of the narrative.

Applying this environment design within the experience helps maintain immersion by making the user feel small from a story standpoint. From a technical UX perspective, as a new VR user I’d want an open space to adjust myself in and take a breather during high-impact scenes. As a VR art viewer, this also gives me time to explore and appreciate my surroundings, the artwork and the details that contribute to the worldbuilding.

However, this linear line work also makes the contrasting dynamic, messy line art more effective when used in fight scenes, symbolising the chaotic and destructive nature of the main character and his enemies every time they destroy a building or attack one another. I want to replicate this style within my visual work and concept art in order to create unique visuals that entice the wider anime market. This experience will serve as a homage – like an art gallery of different events.

Using a manga art style also means I can add violent elements such as blood splatters without making scenes explicit, using ‘black ink’ to signify overflow and power. But, unlike in Chainsaw Man, this element will be limited to environmental storytelling rather than being a tool for shock value or gore imagery, as I don’t want to exploit players’ vulnerable senses for a thrill response.

For this project, I plan for the user to be a civilian watching the action happen from an outsider’s perspective – this way the user has time to piece together the narrative and its worldbuilding, whilst making the experience less disorientating for players.

In terms of narrative, I set up a basic story inspired by the multiverse motif from the film Spider-Man: Across the Spider-Verse. The motif, in summary, is a set of worlds all connected by Spider-Man’s canon ‘events’. In this world, whilst different, the main character – a government field agent – is a hero, but not in the traditional sense: she’s just doing her job. Because of this narrative inspiration, there’ll be subtle nods throughout the experience that suggest she’s part of that multiverse.

Kamal, N. (2023). Oscar Isaac Threatens Miles Morales In The New Spider-Man Trailer. [Image] GIANT FREAKIN ROBOT. Available at: [Accessed 01/11/2023].

In short (more details on the narrative are in the padlet), the story is set in a city divided into two sections – the Safe Zone and the ‘Red’ Zone. In the ‘Red’ Zone lie entities, inspired by eldritch beings, that cause destruction and chaos throughout the city. These entities are then captured by a government-led organisation known as the E.C.U.

Within this experience, the user will play as a civilian who accidentally stumbles into the ‘Red’ Zone and has to be rescued by a government officer. The player will see different art of entities roaming around, destroying dilapidated buildings, getting into fights with the main government officer or even stalking the user throughout the experience.

As a 2D artist, I want to see artwork, whether 2D or 3D, utilise the VR space. This makes the artwork appear more dynamic even if the piece was created primarily on a 2D canvas.

Therefore, the way I want to handle animating action sequences within the project takes a similar approach to the Chainsaw Man manga PVs – promotional videos created to announce and market a new manga volume to the public.

Shonen Jump presents these by animating and showcasing certain panels, showing readers what they’re in for as well as the dynamic art style and action-based scenes that come with the narrative.

Scenes are fast-paced whilst individual movements are slow, capturing that essence of destruction and chaos seen in Chainsaw Man’s world – a visual animation style I’d like to replicate within my VR experience. This animation technique not only uses timing efficiently to create suspense but also allows action scenes to retain their impact.

Athena. (2023). Chainsaw Man Vol 12 PV. [online] Available at: [Accessed 01/11/2023].

In terms of the scope of the project, since this will be a very creative-heavy experience, a lot of the work will go into building the world to replicate that manga style, using a 2D style within a 3D environment. Since I’m confident using Maya and proficient in 2D art, the project’s scope isn’t overwhelming: I’ll be using techniques and software I’m already familiar with, having experimented with elements such as MASH networks beforehand.

However, as an artist, I want to explore different software so I can experiment with multiple mediums in a 3D space – for example, using OpenBrush to create the different designs for the entities. This way, scenes such as entities looming over the player or stalking them through the environment will help amplify that fear of being watched.

For my project plan, I created different tasks in HacknPlan based on their level of importance during this stage of development. This also lets me estimate how much time needs to be spent on each task and manage my time around player-orientated movement or larger milestones, such as finishing the 3D blockout or the 2D assets for the experience. HacknPlan also serves as a tool to log my hours, keeping track of how long I spend on each task.

Milestones within this project are set around not only the work I produce as an artist but also implementing the UX design considerations and guidelines mentioned in my research overview, because this project relies on creating a comfortable and accessible experience for all users. This includes researching VR cameras, playtesting and finalising the Maya environment, as well as finishing concept planning.

This Gantt chart, however, gives me a better understanding of my timeline of tasks, primarily estimated by how many weeks each will take. Using the hours in my HacknPlan, I was able to create a simple Gantt chart including soft deadlines for each major milestone. Adding soft milestones keeps my workflow consistent throughout the project and allows me to manage my time efficiently on each task.

However, there are fewer soft deadlines than in a longer project, due to the stricter time constraints between the development stage and the final playtesting stage. This Gantt chart presents the workflow that was followed throughout the project’s development.

Further VR Research – Artistic Immersion

Whilst looking into visual styles for my 360 video prototype, I also began exploring different VR art showcases to gain an understanding of the visual and experiential expectations, and of the design elements that contribute towards player accessibility and narrative flow. This mainly helped me understand art showcases in general and how to present my art in an effective and aesthetically interesting manner.

For this exploration, I started by researching and analysing Sutu’s artwork, presented through different VR experiences. These settings are mainly based on futuristic cyberpunk aesthetics, inspired by films such as Heart of Darkness or Ready Player One. They’re very abstract art pieces created using OpenBrush, showcasing different worlds with bright neon lighting paired with dark backgrounds. The player moves through these futuristic worlds, often greeted with glitch art – an artform of vaporwave aesthetics – and an overwhelming but visually active world, dense with detail throughout its environments.

Whilst I may not be going for an abstract style, Sutu’s VR art showcases also give me the option of automatically moving the camera through the experience I want to create, as I want to explore the “Droste effect” that Sutu’s art galleries present. The showcases are mainly visual presentations, which means a lack of interactivity or player agency for users, but the experience makes up for it by presenting narratives through Sutu’s art, helping me understand how to apply visual storytelling to my work.

The Curious Alice VR experience (made for the V&A museum), however, allows interactivity, with an art style much more akin to a storybook. This style is incredibly helpful for the similar manga style I want to go for, especially in how it manages perspective for that pop-up-book look through its mixture of 2D and 3D art. This mixture inspired me to use the technique when building up the environment: using manga line work to emphasise the structures’ rough shapes, or adding shadows to the debris surrounding the environment, keeping the setting consistent as well as making the difference between foreground and background more recognisable, enhancing player navigation.

Concept Storyboard

Finally, I created a set of concept art to help determine a consistent 2D art style for the art pieces in the experience. Here, I’m playing around with monochromatic shading techniques and linework for different posters that will appear in different scenes, including alternative versions depending on which section the user is in.

Here are a couple of shots of a potential storysphere I’ve made. This allows me to see the entire environment from a 3D perspective, making it an easy reference for blocking out my city landscape.

And here’s the full storyboard for the project. Since the experience will only be three minutes long, there will be five action sequences set up as scripted events. These action sequences will be created using MASH networks, inspired by the experimentation I did prior to this research proposal, and I’ll experiment with either a VR camera the player can control or a camera that automatically moves through each scene, similar to Sutu’s approach.

Emerging Technologies Lab Exercises Year 3

Emerging Technology – VR Immersive Art

VR Art

As an artist, VR art creates an incredible opportunity when it comes to designing my artwork within a 3D space. Whilst Maya is more of a modelling package, OpenBrush is a collaborative tool designed with both 2D and 3D art in mind.

I experimented with OpenBrush by creating a small diorama, using different paintbrushes and techniques to create a 2.5D experience. Due to technical difficulties with the VR screenshots, I wasn’t able to document my process in the traditional sense, so I’ll describe my workflow instead.

To start, I used thick and oil brushes to map out the whirlpool at the bottom of the diorama, using dark and light blues to add shadows and highlights. I made a tornado-esque spiral pattern to help map out the area I wanted to work in. Despite this area being small, it means I could contain the diorama in a small section and expand on its design and world by adding more in the future.

From there, I continued this spiral visual motif, moving up towards the top. The process was inspired by spruce trees, with the lines serving as the main focus of the diorama. From a user’s perspective, you would start at the bottom of the diorama and work your way to the top – a more linear approach to handling the VR space.

I continued adding darker shades of colour to give the piece depth, as well as highlights in complementary hues such as purple or navy. The bright colours contrasting with the dark background help amplify these cooler but vibrant colour schemes.

As shown in the screenshots, the main theme of this diorama is the ocean. But I also wanted to add further colour contrast to the piece, so I added jellyfish to the scene to further guide the user through the diorama.

One issue I found when creating the jellyfish was a lack of depth – they were often as flat as paper. I was able to combat this, however, by adding movement to each jellyfish, making them curve around the diorama using the highlighter tool.

Once I finished adding the jellyfish, I experimented with extra effects and brushes, such as the flower brush and the star effects, to further entice the user around the diorama, giving the environment some light whilst also working with 3D meshes.

In conclusion, OpenBrush was my favourite software to experiment with, not only for its artistic capabilities but also for working within a 3D VR space for the first time. It allowed me to produce art naturally whilst still being able to adjust it in Maya later on. It also let me freely explore my artistic capabilities without being restricted to a single canvas, and the effects only enhance that experience, making it possible to create beautiful pieces.

In terms of concept art and storyboarding, OpenBrush is the best option for me purely because I’m able to work in a 3D artistic space, making sure that the environments I create are user-friendly and accessible to a wide range of playtesters.

As previously mentioned, storyboarding in OpenBrush is also beneficial due to its lack of restrictions, especially having initially come from a 2D canvas.

This also makes it easier to create 360 experiences when working within the player’s perspective. Therefore, for my larger project (especially as a manga artist), I’d like to use this alongside MASH networks to create scripted events – mixing these two mediums would allow me to make a combined 2D and 3D world, the main hook for my project’s storytelling and environment.

Emerging Technology – Immersive User Experience (UX) and Augmented Reality (AR) 

Whilst Augmented Reality may not directly contribute to my larger VR project, the medium was still interesting to explore. I’ve mainly experienced the format through mobile games such as FNAF AR or Pokémon GO, which utilise real-world environments to create interesting set-ups and mechanics that let characters interact with the world around the player.

During my research into the capabilities of AR, I also noticed that it can be used to advertise products or share contacts, such as through QR codes. QR codes have become essential in modern society due to their accessible nature, especially for users with visual impairments. With AR, scanning a QR code could greet users with a small animation alongside someone’s contact details, adding uniqueness to any contact card or promotion.

In terms of games, as mentioned previously, AR can also be used to virtually interact with a real-life environment – for instance, bringing art layers to life, or having characters and animations interact with the world around the user. In this case, AR is used primarily as an artistic medium.

Creating AR using Zapworks and Unity

During this experimentation, I worked with Unity to create a small AR product that allows users to scan a QR code so that, when they hold their camera over the image, it pops up within the Unity environment.

I started by setting up the Zapworks assets and plugins in Unity, adding an image scanner, a rear-facing camera and an image tracking target. The image tracking target helps Zapworks identify the object when scanned, allowing the image to rotate or move alongside the user’s camera movements.

Once I set up the environment, I adjusted the AR camera so it would be able to scan and identify the image during the training process, and added an example image into the scene.

Sony Pictures Animation (2023). Spider-Man animator gives that one tip that might make all the difference to budding animators. [Image] ABC News, 9 Jun. Available at: [Accessed on 30/10/2023]

I also adjusted the workspace so the image scanner would be able to easily identify the AR trigger alongside adding the image as a tracking target.

Here is a video of the first part of the AR experimentation. This was just a test to help me understand and experiment with Unity’s and Zapworks’ functions and capabilities. However, I wanted to take the experimental AR project further by adding a 3D object that can interact alongside the image target.

So I created a separate object using a plane mesh and added an extra drawing as a material. Along with this, I added triggers to the 3D object so that the mesh would pop up alongside the image.

Here, I added runtime events that make the art visible only when the user’s camera is hovering over the AR trigger.
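Although the exact Unity/Zapworks event hooks are project-specific, the visibility logic itself is simple. Below is a hedged, pure-Python sketch of the ‘seen/lost’ event pattern I wired up; the class and method names are hypothetical, not real Zapworks API calls.

```python
class TrackedArt:
    """Toggle a piece of AR content from tracking events - a sketch of
    the runtime events wired up in Unity/Zapworks, where content is only
    visible while the camera can see the image target. Names here are
    hypothetical, not the real Zapworks API."""

    def __init__(self):
        self.visible = False  # hidden until the target is first seen

    def on_target_seen(self):
        # fired when the image tracker recognises the target
        self.visible = True

    def on_target_lost(self):
        # fired when the target leaves the camera view
        self.visible = False

art = TrackedArt()
art.on_target_seen()   # camera hovers over the AR trigger: visible
art.on_target_lost()   # camera moves away: hidden again
```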

Despite multiple attempts, the 3D pop-up for the AR experiment was unfortunately unsuccessful, but I’d like to continue exploring and improving this concept in the near future.

A test attempt at getting the 3D pop-up to appear; unfortunately, this did not work.

I could definitely use and experiment with this medium to further my art, similar to my prior research. I could also use it to showcase different art pieces for my portfolio or for commission work with employers in the near future, or even create art and showcase it in an animated format. Either way, this emerging technology could help users experience art through a 3D virtual lens, and in the future I’d like to mix different 2D and 3D elements, as in this exploration piece, to present my artwork in a visually engaging manner.

Emerging Technologies – Prototyping Your Immersive Experience: Part 2

Maya – Mash Networks

For the second half of this exercise, I experimented with MASH networks to create abstract visuals that I could utilise in my future VR project.

To start, I used a variety of MASH network nodes to understand the different shapes and patterns each node could produce.

I then started to work with colour, using the colour node to shift hues and saturation randomly, and set a different seed on each node to create varied patterns. The most interesting patterns came from the random node, which I used to simulate explosions. I also found that the random node creates unique distributions in rotation, allowing colours and shapes to overlap with one another.
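As an aside on why the seed matters: a seeded random generator always reproduces the same ‘random’ pattern, which is what makes seeds useful for iterating on a look. Here is a pure-Python sketch of that idea (my own illustrative code, not the Maya MASH API):

```python
import random

def random_rotations(count, seed, max_angle=180.0):
    """Generate a reproducible set of per-instance Euler rotations,
    mimicking how a seeded random distribution works: the same seed
    always yields the same 'random' pattern. Sketch only, not Maya."""
    rng = random.Random(seed)  # local generator, so results are repeatable
    return [tuple(rng.uniform(-max_angle, max_angle) for _ in range(3))
            for _ in range(count)]

# The same seed reproduces the same pattern; a new seed gives a new one.
a = random_rotations(5, seed=42)
b = random_rotations(5, seed=42)  # identical to a
c = random_rotations(5, seed=7)   # a different pattern
```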

These visuals made the patterns appear gravity-defying and aesthetically pleasing. From a VR perspective, however, these shapes and vibrant colours could potentially cause motion sickness, especially in a bright environment.

Here, I continued to work with different colours, shapes and compositions, now taking a more linear approach in my work whilst still experimenting with different hues – again, wanting to explore more abstract imagery.

Here, I worked with perspective, creating illusions that can only be seen from certain angles – for example, an everlasting winding staircase. Such an illusion could further impact VR immersion, potentially being used to make the player question the experience’s true reality, much like the Alice in Wonderland VR experience.

However, one consideration to keep in mind when creating these illusions is the user experience: users, especially newcomers to VR, can find illusions that involve questioning reality disorientating, particularly since VR is essentially meant to transport you into a new reality. So, if I use optical illusions within my project, it’s best to keep them to a minimum to avoid disorientating perspectives.

I also experimented with animating the MASH nodes. Here, for example, I used the random node to increase the strength of the explosion as the animation progresses, adding at least three keyframes to build anticipation into the sequence.
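The anticipation comes from how the strength value ramps between keyframes. As a rough illustration, assuming simple linear interpolation (Maya’s animation curves can also ease), a keyframed attribute can be sketched in plain Python; the function name is my own, not a Maya call:

```python
def sample_keyframes(keys, t):
    """Linearly interpolate a parameter (e.g. a random node's strength)
    between (time, value) keyframes - a sketch of evaluating a keyframed
    attribute at frame t, assuming linear tangents."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]          # clamp before the first key
    if t >= keys[-1][0]:
        return keys[-1][1]         # clamp after the last key
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)   # 0-1 position between the keys
            return v0 + u * (v1 - v0)

# Three keyframes ramping the explosion strength up over 48 frames.
keys = [(0, 0.0), (24, 2.0), (48, 10.0)]
```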

This will be incredibly useful later on, especially when making the various action sequences for my VR project, due to its unique animation possibilities.

Wall Destruction Experiment

I continued to experiment with MASH, starting with a wall destruction sequence in order to further explore the possibilities for scripted action sequences.

Created using Replicator MASH and Grid Distribution as well as Transform MASH

To start, I created a single block and softened its edges to turn it into a brick. Afterwards, I used the Replicator MASH and Grid Distribution nodes to shape the bricks into a simple wall structure with varying patterns. I also added a colour node, using different shades and hues of pink and purple to make the bricks aesthetically pleasing.
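As a rough illustration of what a grid distribution is doing under the hood, here is a pure-Python sketch of laying out brick instances on a grid, with an optional half-brick offset on alternate rows for a bond pattern. This is my own illustrative code, not the Maya API:

```python
def grid_positions(cols, rows, spacing_x=1.0, spacing_y=0.5, stagger=True):
    """Compute brick-instance positions the way a grid distribution lays
    out duplicates: a regular grid, optionally offset on alternate rows
    to give a running-bond brick pattern. Sketch only, not Maya."""
    positions = []
    for row in range(rows):
        # offset every other row by half a brick for a bond pattern
        offset = spacing_x / 2 if (stagger and row % 2) else 0.0
        for col in range(cols):
            positions.append((col * spacing_x + offset, row * spacing_y, 0.0))
    return positions

wall = grid_positions(cols=8, rows=6)   # 48 brick instances
```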

Created a small box and attached a camera – Also added a Colour node to MASH network

Once I had set up the brick wall, I created a small box cart using the extrude tool, then added a first-person camera and parented it to the cart. This was to test the perspective once the box collides with the wall itself.

Camera work, especially in VR, needs to be tested regularly in order to immerse the player in the sequence. Therefore, I also added a border to keep the perspective centred on the main focus, as well as adjusting the focal length to make the wall appear closer than it actually is.

First Person Camera View

After adding the camera, I created a curve to make the cart move in a linear direction, creating constraints between the curve and the cart to produce this simple animation.

Top Down View of the distance between the Wall and the box cart – Added a custom curve.

Once I had adjusted the camera, I added a signal node and parented it to the cart. This node lets the wall react to the collision sequence: when the collision sphere nears the wall, the bricks gravitate away from the signal sphere.

As shown in the MASH signal settings, I customised how the bricks react to the collision sphere, mainly adjusting the rotation and position of each brick to create that warping wall effect.
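The warping behaviour boils down to a distance-based falloff: bricks near the sphere are pushed away, bricks outside its radius are untouched. As an illustration of the idea (my own pure-Python sketch, not the actual MASH signal node):

```python
import math

def displace_bricks(bricks, sphere_center, radius, strength=1.0):
    """Push brick positions away from a 'signal' sphere with a linear
    falloff: bricks inside the radius move outward along the direction
    from the sphere's centre; bricks outside are untouched. Sketch only."""
    out = []
    for (x, y, z) in bricks:
        dx, dy, dz = x - sphere_center[0], y - sphere_center[1], z - sphere_center[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist == 0.0 or dist >= radius:
            out.append((x, y, z))           # outside the sphere: unaffected
            continue
        falloff = (radius - dist) / radius  # 1 at the centre, 0 at the edge
        push = strength * falloff / dist    # normalise the offset, then scale
        out.append((x + dx * push, y + dy * push, z + dz * push))
    return out

bricks = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
moved = displace_bricks(bricks, sphere_center=(0.0, 0.0, 1.0), radius=2.0)
```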

Here are a couple of shots of this action sequence from different perspectives. As shown in the first-person perspective, there’s the illusion of the cart immediately hitting the wall.

In retrospect, however, whilst the cart does collide with the wall, the collision sphere’s range means the cart ‘collides’ with the bricks too early. In addition, from a VR perspective, with the bricks flying directly at the camera, I can imagine this briefly disorienting and startling users during the sequence.

Nevertheless, this experimental task was incredibly beneficial for learning how to create scripted events, showing how objects and structures can react to different node collisions. Not only this, but since I’ll be working on a city environment, learning the distribution and replicator nodes gave me a better understanding of how to create an efficient, consistent blockout for taller, repeating structures when I started work on my VR project.

Finally, here is an outside perspective of the animation, with the sequence varying based on the size of the signal collider, once again creating unique, bouncy visuals. These perspectives also depend on how much the user can see from the first-person camera.

For example, there is more warping within this animation; however, the wall reacts before the cart can properly collide with it. That said, this effect could be used as an opening portal to another section of an environment, and it’s also beneficial for 360 perspectives.

This example is better suited to a more linear VR experience, with the cart realistically crashing into the wall; however, the impact on collision is less unique.

Experimentation 3 – Portal and Audio MASH

For this experiment, I was interested in creating potential VR assets for my larger project. Inspired by the visuals of Across the Spider-Verse, I decided to recreate the portal effect seen in the movie, hopefully to add into my environments later in development.

Gvozden, D. (2023). The Definitive List of ‘Spider-Man: Across the Spider-Verse’ Easter Eggs. [online] The Hollywood Reporter. Available at: [Accessed on 25/10/2023]

I started by creating a simple shape for the portal, adding coloured textures to each part of the model (including lit and shadowed textures inside the hexagon) to emphasise the 3D effect seen once the portal is animated.

Once I finished the model, I made a CV curve (much like the linear curve from the wall destruction experiment) and added a curve node to the hexagon’s MASH network. Once I attached the curve to the node, the vertical pattern was created automatically, moving downwards and creating an infinite loop based on the directional axis of the curve itself.
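The infinite loop works because each instance’s position along the curve is a 0–1 parameter that wraps around when it passes the end. A pure-Python sketch of that wrapping (my own illustrative code, not the MASH curve node):

```python
def loop_offsets(count, t, speed=0.1):
    """Distribute `count` instances evenly along a curve's 0-1 parameter
    range and scroll them over time t; wrapping with modulo is what makes
    the motion read as an infinite loop. Sketch only, not Maya."""
    return [((i / count) + t * speed) % 1.0 for i in range(count)]

frame0 = loop_offsets(4, t=0)   # evenly spaced: 0.0, 0.25, 0.5, 0.75
frame5 = loop_offsets(4, t=5)   # each instance advanced by 0.5 and wrapped
```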

After creating this sequence, I experimented with the random node to make the patterns feel more alive – as if you're travelling through a portal that's constantly changing. I slightly randomised the rotation values and this was the result:
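Conceptually, the curve and random nodes combine something like this – a hypothetical pure-Python sketch, not Maya's API: instances are spaced along a closed loop, a phase value scrolls them endlessly, and a seeded per-instance offset jitters each rotation.

```python
import math
import random

def portal_ring(count, radius, phase, jitter_deg, seed=42):
    """Place `count` hexagon stand-ins around a closed loop. Advancing
    `phase` each frame scrolls the pattern; because it wraps at 1.0 the
    motion loops forever. A seeded random offset per instance mimics the
    random node's rotation jitter without flickering between frames."""
    rng = random.Random(seed)
    jitter = [rng.uniform(-jitter_deg, jitter_deg) for _ in range(count)]
    instances = []
    for i in range(count):
        t = (i / count + phase) % 1.0      # parametric position on the loop
        angle = 2.0 * math.pi * t
        instances.append({
            "position": (radius * math.cos(angle), radius * math.sin(angle), 0.0),
            "rotation_z": math.degrees(angle) + jitter[i],
        })
    return instances

# One full phase advance brings every instance back where it started.
frame_a = portal_ring(12, 5.0, phase=0.0, jitter_deg=15.0)
frame_b = portal_ring(12, 5.0, phase=1.0, jitter_deg=15.0)
```

The wrap at 1.0 is what produces the "infinite loop" the curve node gave me for free in Maya.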

Next, with the same mesh, I wanted to explore the audio nodes MASH had to offer, aiming to create a unique sequence that makes it feel as if the world exists around the player.

So, using a song from the Spider-Verse soundtrack, I applied the spherical distribution node alongside a random node to once again arrange the meshes into different patterns, and then added the audio node.

Usually, the audio node is used on simpler shapes such as spheres or cubes, so at first, whenever the hexagons moved along with the song's waveform, they would distort and clip through the portal effect itself. I managed to fix this issue, however, by lowering the strength of the waveform's influence.
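The fix boils down to scaling and clamping the amplitude before it drives the mesh – sketched here in plain Python with made-up names, not MASH's actual attributes:

```python
def audio_offsets(amplitudes, strength, max_offset):
    """Map per-frame audio amplitudes (0..1) to a displacement for each
    instance. `strength` scales how hard the waveform pushes the mesh,
    and the clamp stops loud frames from punching through the portal."""
    return [min(a * strength, max_offset) for a in amplitudes]

# At full strength the louder frames clip at the limit...
print(audio_offsets([0.25, 0.75, 1.0], strength=4.0, max_offset=2.0))  # [1.0, 2.0, 2.0]
# ...while a lower strength keeps every frame inside it.
print(audio_offsets([0.25, 0.75, 1.0], strength=1.5, max_offset=2.0))  # [0.375, 1.125, 1.5]
```

Lowering the strength, as I did in Maya, keeps the quiet-to-loud contrast while preventing the geometry from intersecting the portal surface.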

After adjusting the audio MASH network, I added a simple black background to further the illusion, clipping the black image plane into the portal effect to make the loop appear more immersive once it started to move.

I then added a simple camera set-up, adjusting the focal length to focus on the portal's warping effect as each hexagon passed by.

Camera and Scene set up

The portal effect worked to create the illusion of infinitely gliding through it. However, as shown with the audio network, because the network is static, it breaks the illusion fairly quickly.

So, similar to my process for the portal effect, I created a CV curve referencing the spherical distribution, allowing the portal effect to play out once again, but this time with the audio synced to the sequence.

This was the final result of the experiment, with different portals gliding around and past the user. The colours and composition help create the illusion of user movement; however, in a 360 environment, I'll need to consider how the user will handle the claustrophobic space the portal effect creates.

For my larger project, I want to work more with MASH networks like these – they have given me a solid understanding of how to approach the VR project, and I want to experiment further with illusions such as portals and warped effects. MASH networks also let me experiment with world-building and action sequences.

With UX design especially, I want the user to experience these effects at a smaller, less space-intrusive scale for accessibility. For instance, if I add the portal effect to my project, I'll need to keep the sequence short or adjust the camera to prevent motion sickness – which usually comes from claustrophobic sequences or constantly moving objects in VR.

Emerging Technologies Lab Exercises Year 3

Emerging Technology – Prototyping Your Immersive Experience

New technology is introduced every year, and with it come new explorations and discoveries, especially when it comes to new heights in player experience. VR is a medium I've always been interested in but never fully experienced, nor was I particularly knowledgeable about it beyond the Oculus Rift, a VR headset whose first development kit was released in 2013.

Regardless, in these experimental blog posts I'll explore different techniques and practices to gain not only a better understanding of the potential for creating different environments, but also of technical aspects such as UX design and player accessibility.

VR 360 Camera

To begin exploring VR, I started by using Maya to understand the basics of experience creation. I created a basic environment with different structural shapes to simulate the player's height within the world.

I also added some basic skydome lighting to the scene, giving the area more depth, especially when rendering. The simple composition of these structures helped create a domain that towers over the user without overwhelming the senses.

The VR camera adds a sense of immersion by allowing users to feel as if they're in a much smaller position. Whilst this was difficult to create at first due to the proportion considerations, this exercise shaped my basic understanding of environment building and prompted me to research and apply one of the fundamentals of UX design in VR: immersion.

Since the VR video won't work in the browser, please download the video for the full experience.

WebVR – Exploring FrameVR

FrameVR is a web-based VR creator that lets users build their own experiences and share those spaces with other members on both PC and VR. The website also lets other users collaborate on your space. These spaces can facilitate anything from workspaces to personal projects, or even blockouts for bigger VR projects, allowing users to test a space in the earlier stages of the production timeline from both a VR and a PC perspective.

This is incredibly helpful because these spaces are accessible on multiple device types, arguably reaching a wider market and more playtesters, especially during the blockout phase of a project.

For FrameVR, I mainly experimented with the tools the software provided by creating a personal space that players could relax in on both PC and VR.

For this small project, I wanted to draw players' immediate attention to the table, using eye-catching objects such as cakes and food models to urge them to inspect the area as soon as they entered the space.

My main theme for this space was a 'shrine' – adding food, plants and candles to the table contributed to the theme, alongside a large angel statue to amplify the calm, warm aesthetic. Whilst a casual environment, this project mainly let me experiment with the different tools FrameVR offers, such as its ability to import models and images into the environment, and features like the drawing whiteboard, text signs and screen-sharing board.

Whilst it was interesting to explore FrameVR and its capabilities, I won't be using it for blockouts during the production of my VR project, mainly due to my familiarity with Maya as a software, as well as Frame's limitations, such as its limited polycount and my lack of experience with Frame's modelling tool.

Nevertheless, this website was interesting to experiment with, especially for understanding visual aesthetics and capturing the user's attention without overcrowding the area.

Emerging Technologies Lab Exercises Production/Portfolio

References – Emerging Technologies

Emerging Technology – Prototyping Your Immersive Experience – Assets Lists Frame VR

dwij8405. (2023). Miguel O'hara Spiderman 2099 Rigged Textured [Model] Available at: [Accessed on 13/10/2023]

3D-Scans. (2023). Broken Wing Angel Statue 3D-Scan [Model] Available at: [Accessed on 13/10/2023]

3DMish. (2023). Pancakes [Model] Available at: [Accessed on 13/10/2023]

Parkinson, M. (2023). Strawberry Cream Cake (shortcake) [Model] Available at: [Accessed on 13/10/2023]

Ergoni. (2023). Cake Roll [Model] Available at: [Accessed on 13/10/2023]

chenkl. (2020). Stylized Salmon Nigiri Sushi PBR Learning [Model] Available at: [Accessed on 13/10/2023]

deekshitkandregula. (2022). Pomeranian – @Venkydeexu18 [Model] Available at: [Accessed on 13/10/2023]

Cookies and Cream. (2023). Vending Machine [Model] Available at: [Accessed on 13/10/2023]

Miku. (2018). Meiko Figurine [Model] Available at: [Accessed on 13/10/2023]

deratege. (2023). Sakura Bonsai [Model] Available at: [Accessed on 13/10/2023]

newmag2207. (2023). Bougies / Candles [Model] Available at: [Accessed on 13/10/2023]

Gatto, G. (2023). Japanese Tonkatsu Sign [Model] Available at: [Accessed on 13/10/2023]

Cookies and Cream. (2023). Pink Dessert Set [Model] Available at: [Accessed on 13/10/2023]

loriscangini. (2014). Kars [Model] Available at: [Accessed on 13/10/2023]

Emerging Technology – Prototyping Your Immersive Experience – Images Frame VR

Anka, K. (2023). Summer KeyFrame magazine Miguel Art Cover [Image] Available at: [Accessed on 13/10/2023]

Emerging Technology – Prototyping Your Immersive Experience – Music

Pemberton, D. (2023). Across the Spider-Verse (Intro) [Media] Available at: [Accessed on 12/10/2023]

Emerging Technology – Immersive User Experience (UX) and Augmented Reality (AR) – Images

Sony Pictures Animation (2023). Spider-Man animator gives that one tip that might make all the difference to budding animators. [Image] ABC News, 9 Jun. Available at: [Accessed on 30/10/2023]

Emerging Technology – Research Proposal – Videos and Templates

Kamppari-Miller, S. (2018). VR Sketch Sheets. [Image] Medium. Available at: [Accessed 01/11/2023].

Athena. (2023). Chainsaw Man Vol 12 PV. [online] Available at: [Accessed 01/11/2023].

Emerging Technology – Research Proposal – Research References

Vinney, C. (2023). UX for VR: Creating immersive user experiences. [online] Available at: [Accessed 15/10/2023].

Green, H. (2017). Here They Lie and the ethics of VR horror. [online] Game Developer. Available at: [Accessed 01/10/2023].

Campbell, S. (2019). Stuart Campbell: Sutu Eat Flies. [online] Available at: [Accessed 05/10/2023].

Loux, B. (2018). Step into the Weird World of Sutu's Immersive VR Paintings. [online] VRScout. Available at: [Accessed 05/10/2023].

Williams, K., HTC Vive Arts and V&A. Curious Alice: The VR experience. [online] V&A. Available at: [Accessed 05/10/2023]

Emerging Technologies


Tangentlemen (2016). Here They Lie [Video Game]. Tangentlemen: Los Angeles. Available online. [Accessed on 01/11/2023]

Illumix (2019). Five Nights At Freddy's AR: Special Delivery [Video Game]. Illumix: USA. Available online. [Accessed on 18/10/2023].

Niantic, Inc. (2016). Pokémon Go [Video Game]. Niantic, Inc: San Francisco, USA. Available online. [Accessed on 18/10/2023].

Spider-Man: Across the Spider-Verse (2023). Directed by Justin K. Thompson, Joaquim Dos Santos, Kemp Powers. [Film]. California, United States: Sony Pictures Animation.

Jojo's Bizarre Adventure Anime: Part 2 – Battle Tendency (2014). Directed by Toshiyuki Katō. Written by Hirohiko Araki. [TV Programme]. Netflix.

Fujimoto, T. (2018). Chainsaw Man. [Manga] Tokyo: Weekly Shōnen Jump.

Chainsaw Man Anime: Season 1 (2022). Directed by Ryū Nakayama. Written by Tatsuki Fujimoto [TV Programme]. Crunchyroll.

3D Character Animation Year 2

Character Animation – Rendering and Conclusion

Before rendering, I set up some basic lighting for each animation. For the voice-over animation, I created a scene to mimic a TV interview, moving the camera close to Mimi's face as she speaks. For the other two animations, I added cameras that showcase the entire model and its motions. Once this was done, I rendered all my animations and edited in the voice lines for the final sequence.


The main thing I learnt was the importance of applying animation principles in 3D. I also learnt different techniques that helped with planning and conceiving dynamic movements, as well as the rigging process, despite it taking up the most time in development.

In the future, however, I would like to refine my animations. The running and dance animations could be improved through graph editing and further timing adjustments so the movements appear more lifelike; especially in the dancing animation, certain movements feel too delicate. Another issue was weight-painting errors, such as clothes clipping and, especially, the hands becoming dislocated from their wrists, causing the arms to look unnatural.

Therefore, in the future, I’d like to continue practicing and refining my skills in this medium and also enhance my first set of sequences in the process.

Collaborative Game Project Year 2

Dev Log #7 – Play testing and finalising the game.

1st May – 7th May

This week, we heavily focused on finishing the game, as well as adding small features to help new players navigate the office levels.

Whilst my tasks at this point were relatively complete, there were still some areas that needed work – especially the script.

We asked a couple of individuals to playtest our game and provide feedback. One of the main pieces of feedback was the lack of hints throughout the game. Kuba and I had discussed adding hints last week; however, we were unsure how to tackle this without making the keycard's location too obvious.

Originally, we had an idea to add an audio cue to the keycard, so the closer the player was to the item, the louder the sound would get. However, for players who want to take their time reading and exploring the narrative, this could become grating very quickly, ultimately leading them to rush through the game, have an unpleasant experience overall, and never replay it. Equally, it could push regular players to rush through without looking at any of the items, leaving them underwhelmed or confused by the experience.
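For reference, the proximity cue we considered amounts to a simple distance-based falloff – a hypothetical sketch with illustrative names, not code from our project:

```python
def cue_volume(player_pos, item_pos, max_dist, max_vol=1.0):
    """Loudness of the keycard's audio cue: full volume on top of the
    item, fading linearly to silence at `max_dist` and beyond."""
    dx = player_pos[0] - item_pos[0]
    dy = player_pos[1] - item_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return max(0.0, max_vol * (1.0 - dist / max_dist))

print(cue_volume((0.0, 0.0), (0.0, 0.0), max_dist=10.0))   # 1.0
print(cue_volume((5.0, 0.0), (0.0, 0.0), max_dist=10.0))   # 0.5
print(cue_volume((30.0, 0.0), (0.0, 0.0), max_dist=10.0))  # 0.0
```

The problem we identified isn't the maths but the UX: a cue that scales with proximity keeps nagging at players who linger, which is why we opted for dialogue hints instead.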

I also came across a similar issue when looking back at last week's post: players may not have the time or patience to look through different items to find the keycard, especially given the game's relatively fast pace. It slows the pacing down and could take the player out of the experience. To counteract these issues and concerns, we added special dialogue to levels that require a keycard to access the next stage or a new area.

After completing the script, I finished off the project by completing my tasks in HacknPlan and logging the hours spent throughout the project's development.


As a short conclusion to this project: the game underwent some major changes from its initial planning and prototype stages, mainly due to time constraints, over-ambition or simply stylistic choices.

Nevertheless, working with Kuba and Damien, even on a small project such as this, taught me that communication is vital, as is keeping a consistent schedule. As a concept artist and writer, this was an important learning curve, especially when communicating with and receiving feedback from my group.

Throughout development, this also kept me motivated, as both positive feedback and critique positively shaped the game we have today. It's very polished compared to its prototype counterpart. Overall, the game turned out to be a unique piece that, as a group, we can't wait to develop further, along with the universe surrounding it.

Gameplay Video

3D Character Animation Year 2

Character Animation – Animating Pipeline

Animation 1: Run Cycle

The run cycle took the longest amount of time due to the different moving parts, the changing directions of the limbs and the timing.

This was, however, the best sequence to start with, because it introduced me to the basics of animation.

I blocked out the key poses from the reference, starting with anticipation – the character floating down – and adding squash and stretch to the action by having her land and jump back up in preparation for the run.

The many key poses for this sequence were due to the clear difference in animation between the sources. The video source is from an '80s anime, so there are a lot of extra frames providing a smooth action cycle, which made the stepped animation technique the best fit. It also means the movement can be precise, making it a good learning tool for positioning as well as for experimenting with straight ahead action.

This took the longest because the limbs have to move consistently in time with one another. It also created arcs within the sequence as the limbs slow in and out of position.

I had also started to notice issues with the weight painting, as the gloves clip during movement. I tried to fix this issue without re-altering the model.

There were restrictions on chest movements, which prevented the model from twirling around, so this section had to be scrapped and replaced.

Once I finished making the first cycle of frames, I was able to duplicate them throughout the sequence. The blockout, timing and motion were the most important parts, with in-between frames making the movements slower but smoother. I then went into the graph editor and converted the curves to produce stepped animation.

Whilst the stepped animation does simulate that 2D style, the sequence became blocky and unnatural despite the extra frames.

In the interpolated version, the movements were still snappy but produced a much smoother outcome. I adjusted the timing of the legs and added neck movements as secondary actions to make the animation more dynamic.
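The difference between the stepped and interpolated versions can be shown with a toy keyframe evaluator – an illustrative sketch, not Maya's actual curve evaluation:

```python
def sample(keyframes, frame, stepped):
    """Evaluate one animation channel at `frame` from sorted
    (frame, value) keys. Stepped keys hold the previous pose until the
    next key (the blocky, 2D look); otherwise the value is linearly
    interpolated between the surrounding keys for smoother motion."""
    prev = keyframes[0]
    for kf, kv in keyframes:
        if kf > frame:
            if stepped:
                return prev[1]
            pf, pv = prev
            t = (frame - pf) / (kf - pf)
            return pv + t * (kv - pv)
        prev = (kf, kv)
    return keyframes[-1][1]

keys = [(0, 0.0), (10, 90.0)]          # e.g. a joint rotating over 10 frames
print(sample(keys, 5, stepped=True))   # 0.0  - holds the first pose
print(sample(keys, 5, stepped=False))  # 45.0 - blends halfway
```

Switching tangent types in Maya's graph editor is effectively toggling between these two evaluation modes per key.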

Animation 2: Dancing

I started with a simple blockout of a couple of key poses rather than making all of them from the reference. This is known as the pose-to-pose method, as this sequence won't use the stepped animation style. It speeds up the animation process because I don't have to block out the hand gestures or neck movements until much later.

The main focus is the coordinated movement. Whilst the character isn't constantly moving, each limb moves at different times: the legs slow in and out of movements whilst the arms consistently create large arcs.

After blocking out the dance sequence, I adjusted the timing and added in-between frames to produce smoother movements. I also added in neck and hand gestures as secondary actions.

The only issue with this sequence, however, was the jump, which was awkward to achieve by moving the controls. Instead, I added a small squash and stretch to the jump.

Animation 3: Voice-Overs

My final animation set out to overcome the challenge of animating voice lines. Because of this, I didn't plan an overly dynamic sequence – instead, it would be used to show blendshape capabilities.

In 2D animation, I've had previous experience animating voice lines using the standard mouth guides, as shown here: (2023). Image pack of Frontal Mouth shapes. [Image] Available at: [Accessed 13/04/2023].

However, I also planned to experiment with a different variant of lip-syncing, mainly used in a lot of anime to simulate speaking. I used this video as a reference for the technique.

Saltydkdan (2018). How I made a Fake Pop Team Epic Skit (and how you can too). [Video] Available at: [Accessed 13/04/2023].

However, whilst I'm trying to imitate an anime style, translating this method directly into 3D could create stiff movements, making the character's speech difficult to understand if the voice lines were removed.

So I decided to combine both methods. I had a limited number of blendshapes, mainly due to the lack of flexibility in the model's retopology, but this meant I could experiment with different mouth shapes to create even more blendshape possibilities.

Before animating the mouth, I worked on secondary actions by blocking out some movements with subtle cues in her body language. This makes her much more expressive, further adding to her appeal.

After finishing the blockout, I animated the facial expressions and mouth movements, using the straight ahead method to mimic the spoken lines. Her speech also slows in and out within the animation depending on each word's duration.

Finally, I adjusted the timing of the different lines to sync Mimi's body movements, and added her facial expressions into the final rendition, slowly becoming more apparent as her face moves away from the camera.

Collaborative Game Project Year 2

Dev Log #6 – Writing up a script and Creating Enemy Sprites

Date: 24th April – 30th April

This week heavily focused on finalising any extra, smaller tasks as well as completing my main goals for the game. Writing the script took the most time due to the number of interactables I wanted to add. As a group, we also acknowledged that even if we didn't have enough time to include all the dialogue in the project, the script would give us an idea of how to move forward narratively, or help when we decide to develop the game further in the future.

Whilst making the script, I also fully finished creating the enemy sprites for the game, once again following the deterioration system I mentioned previously. This week, we also decided to cut the number of levels down from 8 to 6 due to further time constraints.

Writing up the Script

As I continued working on the script, I added small notes for our programmer throughout, splitting the script into 6 levels, each with its own set of unique items our main character can collect or read through.

I also included two small cutscenes in the game – one being the elevator scene, where the main character realises he's stuck in the office building. This is the moment his main goal of finding the stairs is presented to the player.

Another cutscene follows shortly afterwards, introducing the monsters through the character's first encounter with them. This also serves as a short tutorial, providing a small hint on how to deal with the monsters within the game.

He's just as clueless as the player, but by picking up and reading information from certain items, the player can start to piece together what happened to the office.

Whilst these items are optional and don't necessarily provide large hints about keycard locations, they further explore the narrative and expand the world-building surrounding the game, whilst not giving away too much about the cause of the 'outbreak'.

This sort of narrative exploration also draws in a niche player demographic, as it lets players come up with their own conclusions for the story and gives them an opportunity to analyse the office further. In total, I've made 7 pages' worth of dialogue this week, which will be fully implemented by the end of next week.

Finally, here’s the script to review if you’d like to see the different dialogue and cutscenes added to the game.

Creating New Enemy Sprites

Over the last two weeks, I've been creating more enemy sprites whilst working on my main tasks. As mentioned in previous posts, I planned to make 6 different enemies for the game. With the level cutback, this means a new enemy is (potentially) introduced in each level. This system keeps the monsters' visual decay present on each floor, and also serves as a subtle visual indicator of how much progress the player has made throughout the game.

All enemy sprites based on deterioration

These are all the enemy sprites made for the game. Some design changes had to be made due to the 16px aesthetic limitations; however, the decay is clearly present in each mutation. I stayed as close to the original enemy designs as I possibly could, especially with the colour schemes.

In terms of movement, I decided to make only forward- and backward-facing sprites, as they appear more threatening. They were also a lot easier to animate given the time constraints. Once again, this style of movement is heavily inspired by Faith, whose enemies move in a very aggressive, intimidating fashion.

FAITH WIKI (2022). Offering Ending [Image] Available at: [Accessed on: 27/04/2023]

The decision to make only forward- and backward-facing enemies also comes from our test enemy, shown above. They're much taller and faster than the player, but their movements are also obscured by the 'blink' screen. This compromise saved quite a bit of time during development and created some horrifically unnatural movement.

In our final week of development, we will discuss the playtesting results and some final touches, including rewriting certain bits of dialogue and adding keycard hints that are directly conveyed to the player.