• Visualisation of 3D Objects in 2D

You must have seen that the objects around you come in different shapes and sizes. You are always surrounded by shapes such as triangles, squares and circles. Have you ever observed the shape of your house? Houses have length, breadth and height, and so they are called 3-dimensional or 3D shapes. Let us now learn more about them.


What are 2D Objects?

2-D figures are simply two-dimensional figures. A shape or figure that has only a length and a breadth is a 2D shape. Its sides are made of straight or curved lines, and it can have any number of sides. Triangles and squares are examples of 2D objects. We will classify the figures we have seen in terms of what is known as their dimension.

A plane object that has only a length and a breadth has two dimensions. Such shapes can be drawn on a screen or on paper. The following are examples of 2D objects.

3D Shapes

3-D figures are three-dimensional figures. In our day-to-day life we see several objects around us, such as books, balls and ice-cream cones, which have different shapes. One thing common to most of these objects is that they all have some length, breadth and height (or depth).

Therefore they all occupy space and have three dimensions. Such objects cannot be drawn exactly on a flat screen or sheet of paper, because they are three-dimensional. The following are examples of 3D objects.


Nets for Building 3D Shapes

“A net is a two-dimensional representation of a three-dimensional figure that is unfolded along its edges so that each face of the figure is shown in two dimensions. In other words, a net is the pattern made when the surface of a three-dimensional figure is laid out flat, showing each face of the figure.” Different shapes have different nets.


Different Views of 3D Shapes

There are different views of 3D shapes such as top view and side view.


For example, the top view of a cylinder will be a circle but its side view will be a rectangle.

Isometric Sketch

An isometric sketch of a shape is a 3D projection of that shape in which all lines are drawn parallel to the major axes and to scale, so the lines are measurable. For example,


Example: Three cubes, each with a 3 cm edge, are placed side by side to form a cuboid. Draw an isometric sketch of this cuboid and find its dimensions in cm.

Solution: The three cubes are placed side by side to form a cuboid, so only one of the dimensions changes while the other two remain the same. Two of the three dimensions of the cuboid will be 3 cm, and the third will be 3 + 3 + 3 = 9 cm. Thus, the dimensions of the cuboid are 3 cm × 3 cm × 9 cm.
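The arithmetic above generalises to any number of identical cubes placed in a row. A minimal Python sketch (the function name is just illustrative):

```python
# Dimensions of a cuboid formed by placing n identical cubes in a row:
# one dimension becomes n times the edge; the other two stay the same.
def cuboid_dimensions(edge, n):
    return (edge, edge, n * edge)

print(cuboid_dimensions(3, 3))  # three 3 cm cubes -> (3, 3, 9)
```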

Solved Examples for You

Question 1: What cross-section do you get when you cut a candle horizontally?

Answer: You get a circle when you cut a candle horizontally.

Question 2: A bulb is glowing directly above a ball. Name the shape of the shadow obtained.

Answer: The shadow obtained will be a circle, since the ball is a sphere.

Question 3: What are 3d shapes examples?

Answer: The cube, sphere, cone, rectangular prism and cylinder are the basic 3-dimensional shapes that we observe around us. One can observe a cube in a Rubik’s cube and in a die, and a rectangular prism in a box or a book. Similarly, a sphere can be observed in a globe, while a cone can be observed in a carrot or an ice-cream cone.

Question 4: Explain the definition of 3d shapes?

Answer: 3D shapes are solid objects that have three dimensions: length, width and height. Unlike flat shapes, 3D objects have depth to them.

Question 5: Explain what is a 3d triangle?

Answer: The tetrahedron is the three-dimensional case of the general concept of a Euclidean simplex, so one may also call it a 3-simplex. In the case of a tetrahedron, the base is a triangle; therefore, a tetrahedron is also called a “triangular pyramid”.

Question 6: What is a face of a 3D shape?

Answer: A face is any of the flat surfaces of a 3D shape. For example, a cube has six square faces, while a square-based pyramid has one square face and four triangular faces.



Tech Differences

Know the Technical Differences

Difference Between 2D and 3D Shapes

2D shapes Vs 3D shapes

These terms are very common and frequently heard. We use 2 and 3 dimensions in several areas, such as engineering drawing, computer graphics, animation, and in real life too. In this article, we are going to understand the difference between 2D and 3D shapes.


Definition of 2D Shapes

Shapes that can be drawn on a flat surface are said to be 2D (two-dimensional) shapes. In other words, shapes that have only length and width are 2D shapes.

Before understanding what a 2D shape is, we should know what a 0D object is: an object with no dimensions at all. A 0D shape is defined by a point.

Applications

All parallel projections and one-point perspective projections in the plans of an object are made in 2D. Geological maps are also made in two dimensions, using the method of contouring to show depth with the help of different shapes; the same is done in oceanography.

2D shapes examples

Definition of 3D Shapes

3D shapes are solid shapes produced by combining three dimensions – length, width and height – unlike 2D shapes. Real-life examples of these shapes are buildings, balls and boxes: anything that has three dimensions. A cuboidal building, for instance, is built with length, width and height, and so is a 3D shape.

These shapes are used in several applications, such as 3D animation, 3D design of products, buildings, bridges and tools, 3D graphs, maps, etcetera. 3D shapes help to show the depth of an object. To illustrate 3D in engineering, we use two- and three-point perspective projection and orthographic projection.

3D shapes examples

Key Differences Between 2D and 3D Shapes

  • The primary difference between 2D and 3D shapes is that a 2D shape involves only two axes, the x- and y-axes, whereas a 3D shape covers three axes: x, y and z.
  • Square, circle, triangle and rectangle are various 2D shapes, while cube, sphere and cuboid are 3D shapes.
  • Plans in engineering drawing, which represent an object in top view, front view and side view, are made in 2D, so we use 2D shapes in these types of plans. Conversely, to represent three dimensions, isometric and orthographic projections are used for rendering 3D objects.
  • We can use manual or automatic methods to create 2D and 3D shapes, and there is a variety of software for doing this. However, 2D shapes are easy to create, whereas 3D shapes are more challenging to build.
  • A 2D shape shows all of its edges, but in a 3D shape some edges can be hidden. For example, in a square all the edges are visible, but in a cube it is not possible to see all of the edges from a single angle.

From the above discussion, the basic difference between 2D and 3D shapes should be clear. 2D shapes are elementary flat shapes with only two dimensions, along the x- and y-axes. In contrast, 3D shapes span three dimensions – x, y and z – and in other terms, these shapes have volume too.
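The axis difference can be made concrete with a small sketch: a square needs only (x, y) pairs, while a cube needs (x, y, z) triples, and the extra axis is what gives it volume. The coordinates below are an illustrative example:

```python
# A 2D shape is described with x and y only; a 3D shape also needs z.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # four (x, y) corners
cube = [(x, y, z) for x in (0, 2) for y in (0, 2) for z in (0, 2)]  # eight (x, y, z) corners

side = 2
print("square area:", side ** 2)   # 2D measure: 4
print("cube volume:", side ** 3)   # 3D measure: 8 (the z-axis adds volume)
print(len(square), "corners vs", len(cube), "corners")
```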




The home of mathematics education in New Zealand.


Representing 3D objects in 2D drawings


This unit develops students’ ability to represent three dimensional objects using two dimensional representations.

  • Use plans from different viewpoints to represent 3D objects.
  • Draw isometric drawings of 3D objects.
  • Create nets for polyhedra.
  • Interpret the above representations to create a model of the 3D object.

In this unit students learn to use two different two-dimensional drawings to represent three-dimensional shapes. 

The first type of drawing is the use of plan views. These views are usually from the top, front, and side, as you would see in house plans. Such views are called orthogonal, meaning that the directions of sight are at right angles to each other. The images below show an example of plan views.

The top, front, and side view of a 3D shape built with cubes.
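One way to see why information is lost in plan views is to model a cube building in code. In this sketch (the cube positions are an arbitrary example model), each orthogonal view is obtained by simply dropping one coordinate:

```python
# Plan (orthogonal) views of a cube model. Each cube occupies an integer
# (x, y, z) cell: x runs right, y runs back, z runs up.
cubes = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 1, 0), (0, 0, 1)}

top   = {(x, y) for x, y, z in cubes}   # bird's-eye view: drop z
front = {(x, z) for x, y, z in cubes}   # front view: drop y
side  = {(y, z) for x, y, z in cubes}   # right-side view: drop x

# Five cubes collapse to fewer squares in each view, which is exactly
# the information lost when a 3D object is drawn in 2D.
print(len(cubes), "cubes ->", len(top), "squares seen from the top")
```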

The second type of drawing is perspective, using isometric paper. True perspective shows objects getting smaller as they are further from the point of sight. Iso means “same” and “metric” means measure, so isometric paper shows every cube as the same size. Therefore, an isometric drawing shows a three-dimensional model from a single viewpoint, and distorts the perspective the eye sees. This can be seen in the image below. 

A 3D cube-tower represented on isometric paper.
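The isometric convention can be expressed as a simple coordinate mapping: the x- and y-axes are drawn at 30° to the horizontal and the z-axis points straight up, so every edge keeps the same measure and distant cubes do not shrink as they would in true perspective. A minimal Python sketch of this standard projection:

```python
import math

# Standard isometric projection: map a 3D point to 2D paper coordinates.
# Edges parallel to the axes keep their length ("iso-metric" = same measure).
def isometric(x, y, z):
    u = (x - y) * math.cos(math.radians(30))
    v = (x + y) * math.sin(math.radians(30)) + z
    return (round(u, 3), round(v, 3))

# Two opposite corners of a unit cube land at:
print(isometric(0, 0, 0))  # (0.0, 0.0)
print(isometric(1, 1, 1))  # (0.0, 2.0)
```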

In this unit students also work to develop flat patterns (nets) for simple solids, such as pyramids and prisms. A pyramid consists of a base, which names the solid, and triangular faces that converge to a single vertex, the apex. A hexagonal-based pyramid has a hexagonal base and six triangular faces. A prism has two parallel faces, which also name the solid, and parallelogram-shaped lateral faces. In right prisms those lateral faces are rectangles. Prisms therefore have a constant cross-section when ‘sliced’ parallel to the naming faces.

Hexagonal pyramid                             Hexagonal prism

The flat surfaces of a three-dimensional solid are called faces. All faces must be connected in a net, the flat pattern from which the solid can be built. However, not all arrangements of faces will fold to the target solid. In the net for a hexagonal pyramid there needs to be one hexagon and six triangles, arranged in a way that means when folded there are no overlapping and no missing faces.

The learning opportunities in this unit can be differentiated by providing or removing support to students, and by varying the task requirements. Examples include:

  • using physical materials, such as connecting cubes and connecting shapes, so students can build the models they attempt to draw
  • beginning with simple cube structures and solids, then building up with competence and confidence
  • scaffolding the drawing of a net by rolling the solid through its faces, and sketching around the outside of each face, in turn, to form the net
  • using online drawing tools at first, particularly for isometric drawing, to facilitate visualisation and encourage risk taking
  • encouraging collaboration (mahi tahi) among students.

The contexts for this unit can be adapted to suit the interests and cultural backgrounds of your students. Select three-dimensional structures that are meaningful to your students. Mathematical solids are often used in construction, such as the shape of traditional Māori food stores (Pātaka), climbing equipment in playgrounds, and iconic structures around the world, e.g. pyramids of Egypt, high rise buildings. 

Consider how you can make links between the learning in this unit, and other recent learning. For example if you have recently visited a local marae, your students might be engaged by the concept of drawing the floor plan of the marae. 

Te reo Māori vocabulary terms such as āhua ahu-toru (three-dimensional shape), āhua ahu-rua (two-dimensional shape), āhua (shape), inerite (isometric), tukutuku inerite (isometric grid), raumata (net of a solid figure), tirohanga (perspective), tirohanga pūtahi (one point perspective), and the names of different shapes could be introduced in this unit and used throughout other mathematical learning.

  • Multilink cubes or another form of connecting cube
  • Polydrons or other form of connecting polygons
  • Objects from around the classroom or 3D solids (cube, pyramid, cone, cylinder, sphere, etc.)
  • Digital camera
  • Cardboard, tape or glue
  • Protractors and rulers
  • PowerPoint 1
  • PowerPoint 2
  • PowerPoint 3
  • PowerPoint 4
  • PowerPoint 5
  • Copymaster 1
  • Copymaster 2
  • Copymaster 3
Session One

  • Use plan views to represent three-dimensional models made from cubes

In this context, you might draw on the knowledge of community members (e.g. builders, architects) and have them show architectural plans to your students.

  • Use slides one and two of PowerPoint 1  to introduce the idea of architectural plans. What are these pictures used for? Discuss the idea of flat (two-dimensional) drawings used to show the structure of three-dimensional structures.  Sometimes more than one drawing is needed. Why? Tell the students that today they are becoming architects.
  • Give a student ten multilink cubes and ask them to create a building for you. The only stipulation is that faces meet exactly. Show the class the building laying flat on your hand. After a few seconds of viewing time, place the building flat on a desktop. Move to the front of the desk, crouch down and take a digital photograph with the house at lens level. Draw what you think the photograph looks like.
  • Let students sketch their idea of the viewpoint. Show the photograph on an interactive whiteboard, television, or using a projector. How do the cubes show in the photograph? (as squares) Why do they appear that way? (Only one face of each cube is visible) What strategies did you use to get the viewpoint correct? (layers or columns, relative position, etc.) Be aware that many students are likely to have attempted to show depth in their pictures. Point out that the camera can only capture what it sees. 
  • Repeat the exercise for the right-hand and birds-eye views. Are students understanding that information is lost when a 3D object is represented in 2D diagrams?
  • Show slide three that depicts a correct layout for plan views. Plan views are often called orthogonal because they are at right angles to faces of the model. How many cubes make up this building? How do you know? (9 cubes)
  •  Ask students to take nine or ten cubes and make their own building. Put your building on a desktop. Draw your building from the front, right side and top. Provide grid paper ( Copymaster 1 ) to support students with drawing squares.
  • Roam the room to support students. Taking digital photographs of their models and showing the image is useful for students who find it hard to minimalise the information they show.
  • After a suitable time, ask students to bring their plan views to the mat, and to leave the model on a desktop somewhere in the classroom.  Gather all the plans, shuffle them, and deal out one per student. Can you find the model that goes with the plan?
  • Let students have a suitable time to locate the models. Some plans may have more than one appropriate model.
  • Discuss the features they looked for in locating the model. Which plan was the most useful? (A key point is that one viewpoint, often the top, is a good screening tool for possible models. Other views can be used to confirm the correct model).
  • Return to Slide Three. Ask the students to use the cubes from their previous model to build a structure that matches the views. Once they believe they have a correct model, students can justify their answer to a partner. Animate Slide Three to reveal the correct answer. Discuss the use of the top view to organise the information from the other views.
  • Slides Four and Five have two other plan view puzzles. Animating each slide provides a model answer.
  • Students might also work on the Figure It Out activities called Building Boldly and X-ray vision . PDFs are available for the student pages.

Session Two

  • Coordinate different views of the same structure to form a model of it.
  • Represent cube models with isometric drawing.

Before class gather at least five different shaped objects from around the classroom. The objects might be mathematical models (e.g. cube, pyramid, sphere, etc.) or common objects (book, cone, bottle, box, etc.) or a combination of things. It is better if the objects are different heights.

  • Using a large sheet of paper placed on a desktop, draw a grid of squares. 10cm x 10cm squares are a good size. Arrange the objects at different locations on the grid. Take digital photographs with the grid at lens level. Capture views from all four compass points.
  • Place the grid on the mat or on a central table. A spy took these photographs of an enemy city. She took four pictures, one from each of the compass points. After returning to her base she emailed the images. Show the students all four views of the ‘city’ on an electronic whiteboard or using a data projector.  Your job back at Kiwi Intelligence is to construct a plan map of the city. You know there are these buildings (objects). Look carefully at the photographs to work out where to put each building.
  • Let students sketch a birds-eye view of the city. They might name the buildings (object) on their plan rather than draw the shapes. After a suitable time, gather the class to decide where to position each object. Look for students to coordinate views to do so.
  • It is common for travellers to create optical illusions of places they visit. Images appear impossible, such as someone holding up the leaning tower of Pisa. Slide One of PowerPoint 2  has an illusion like that. How do these tricks work? (Objects that are further away look smaller, even to a camera. That is called perspective) How do artists adjust what they draw to allow for perspective? (Show Slide Two)
  • One way to represent cube models is to use isometric drawing. That method does not have the vanishing points of perspective drawing, but it does partly show that the object is three dimensional. In isometric drawings things that are further away do not get smaller, but all parallel edges remain parallel (Show Slide Three)

Four cube-models of increasing complexity represented on isometric paper.

  • Ask students to create a model made from interlocking cubes. A maximum of ten cubes is wise. Students sketch their models on isometric paper. The sketches can be given to a partner who makes the model. Sometimes different models can be made for the same drawing.
  • Begin with the front-most cube.
  • Hold the model so the leading edges face up and down.
  • Build the drawing across and up first.
  • Create an L shape for cubes that come out at right angles.
  • Watch for parts of faces that might be visible.
  • Imagine a light shining on the model from behind (to shade faces)
  • How many cubes are needed to build this model?
  • Is that the smallest possible number of cubes?
  • What is the largest number of cubes that could be in the model? 
  • Challenge students with the Figure It Out activity, Cube Creations . In the task students firstly build models from isometric drawings and join the models to create cubes. The second challenge is for them to create a cube puzzle of their own and draw the pieces (models) using isometric paper.

Session Three

  • Connect plan views and isometric drawings for the same three-dimensional cube model
  • Discuss the strategies that were helpful in producing a correct drawing. Ideas might include:
  • Identifying which direction is the front.
  • Starting with the front most stack of cubes.
  • Building the ground layer first before building up.
  • Considering the cubes that cannot be seen.
  • Erasing unwanted lines.
  • Shading faces as you go so the blocks look solid.
  • Engage in reciprocal partnerships (tuakana teina) again. Both partners draw a model they create. They choose three plan (front, top, side) or isometric views to draw. The other partner creates a different drawing of the same model, then builds it to check.
  • Ask students to work on the Figure It Out activity called A Different View . In this activity students match isometric views, with directional arrows, to the corresponding plan views from those perspectives. They also draw 2D representations of everyday objects such as cups and paper clip holders. A PDF of the student page and answers are provided. An extension activity can also be found in Missing Anything , a Level 4+ Figure It Out page.  

Session Four

  • Create nets for simple solids (prisms and pyramids).
  • Begin this session by showing your students some graphics of simple solids. Slides One to Three of PowerPoint 4  show three types of prism, triangular, hexagonal and rectangular (cuboid). Show each solid in turn, and ask: Where are you likely to see a shape like this? What is the shape in this picture called? What are the shapes of its faces? How many vertices (corners) and edges does it have?
  • Discuss:  What do all three solids have in common? The common properties that define a prism are, a solid that has two identical parallel faces and all other faces are parallelograms. Slide Four shows a loaf of bread being sliced. How are a loaf of bread and a prism the same?
  • Prisms are sometimes defined as solids with constant cross section. Slices of bread are a similar shape. It is the cross-section that determines the name of a prism. Slide Five shows a pentagonal prism as the cross section is a five-sided polygon.
  • Show Slide Six.  Here is a rectangular prism shaped box that holds soap powder. Imagine that I open out the packet to form the flat pattern that makes it. Sketch what you think the net will look like.
  • Do students attend to the shape of faces in constructing the net?
  • Do they visualise the effect of folding up faces?
  • Do they consider which sides of the net will need to be the same length for edges to form correctly?
  • Do they consider tabs needed for gluing the net together? (Usually every second side of the net.)  
  • Provide Slides Seven and Eight for students to create their own nets. The triangular and hexagonal prisms are more challenging than the cuboid, particularly getting the angles and side lengths correct. You may need to support some students to create 60⁰ internal angles for equilateral triangles and 120⁰ angles for regular hexagons. Use protractors to get accurate measures.
  • Show Slide Nine, which shows the three nets. What is the same about all three nets? (Rectangular faces in a line) What is different about the three nets? (The parallel faces that create the cross-section) How can you tell how many rectangular faces the prism needs? (The number of rectangular faces equals the number of sides on one of the parallel faces) Visualise the net for the pentagonal prism. What does that net look like? (Five rectangles in a line, with two pentagonal faces).
  • Slide Ten gives images of three pyramids: tetrahedron (triangular-based), square-based, and hexagonal-based pyramid. What are these three-dimensional shapes called? In what way are the solids related? Look for students to discuss the properties of a pyramid: a base of a given shape, and triangular faces that meet at an apex.
  • Construct a base that is a regular polygon (same side lengths and angles)
  • Arrange the triangular faces so they emanate from each side of the base shape
  • Construct isosceles triangles with two equal sides for the lateral faces.
  • Constructing a pentagonal, octagonal or dodecagonal based pyramid is an excellent challenge for students who are competent. Let students investigate the problem in pairs and record their ideas.
  • Use the models of prisms and pyramids to look at the number of faces, edges, and vertices in each solid. Discuss systematic ways to count. For example, to count the edges of a prism, count around each parallel face and add the lateral edges.
  • Create tables for the solids you have models for.

Look at the table for prisms together (see below):

  • The number of faces is two more than the number of sides in the cross section. Why?
  • The number of edges is three times the number of sides in the cross section. Why?
  • The number of vertices is two times the number of sides in the cross section. Why?  

​​​Look at the table for pyramids together (see below):

  • The number of faces is one more than the number of sides of the base shape. Why?
  • The number of edges is double the number of sides in the base shape. Why?
  • The number of vertices is one more than the number of sides of the base shape. Why?
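The patterns in both tables can be written as formulas and checked together; the sketch below (function names are illustrative) also verifies Euler's formula V − E + F = 2 for both families:

```python
# Face, edge, and vertex counts for a prism and a pyramid whose
# cross-section / base has n sides, matching the patterns above.
def prism_counts(n):
    return {"faces": n + 2, "edges": 3 * n, "vertices": 2 * n}

def pyramid_counts(n):
    return {"faces": n + 1, "edges": 2 * n, "vertices": n + 1}

# Euler's formula V - E + F = 2 holds for both families:
for n in range(3, 10):
    for c in (prism_counts(n), pyramid_counts(n)):
        assert c["vertices"] - c["edges"] + c["faces"] == 2

print(prism_counts(6))    # hexagonal prism: 8 faces, 18 edges, 12 vertices
print(pyramid_counts(6))  # hexagonal pyramid: 7 faces, 12 edges, 7 vertices
```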

Session Five

  • Establish whether, or not, a given net for a simple solid is viable.
  • Visualise which sides and corners of a given net will meet when the net is folded.
  • Begin with Slide One of PowerPoint 5  that shows a viable, though unconventional, net for a triangular prism. Will this net fold to form a solid? Which solid will it create? How do you know? (Consider the number and shapes of faces, the result of folding)
  • Mouse click and a single corner of the net will be highlighted. Imagine this net is folded. What other corners of the net will meet?
  • Mouse clicks reveal the other corners that meet.
  • Another mouse click shows a side of the net. Which other side meets this one when the net is folded? How do you know?
  • Mouse click to see the other side that connects.
  • Discuss how many corners meet to form a vertex (three) and how many sides form an edge (two).
  • Ask similar questions for Slides Two and Three that show other nets.
  • Provide students with Copymaster 3  that contains a set of similar folding puzzles for different solids. Students might work in collaborative, small groups and justify their solutions to each other. Tell them that their first task is to decide whether, or not, the net folds to make a solid.
  • Imagine the net is folded and tracking the destination of corners and sides as they form vertices and edges of the target solid.
  • Consider the properties of the target solid, e.g. the parallel faces of a prism, or the corners of a pyramid converging at the apex.
  • Eliminate obvious corners and sides first.
  • Recognise when the positioning of shapes in a net, results in overlaps, or omissions of faces, in the target solid.
  • Challenge students to create similar puzzles for their classmates. The net may be possible or impossible and they should choose a corner and a side that challenges the solver.
  • Caught in the nets
  • Loads of lamingtons
  • Nutting out nets


10 3D Object Representations

T. Raghuveera

Objectives:

  • Understand Mesh Modeling
  • Understand spline based modeling.

Discussion:

3D Object Representations:

So far we have discussed only the 2D aspects of Computer Graphics. Let’s now focus on the 3D part. In short, 3D is simply an extension of 2D with an extra dimension, called ‘depth’, added. In reality the objects in the world are 3D, and thus 2D is only a hypothetical case. Most of the transformations, algorithms and techniques that we have learnt in the context of 2D can be extended to 3D by adding the third coordinate, the z-coordinate. The aim of this study of 3D Computer Graphics is to imitate the real world as closely as possible. 3D modeling and rendering APIs like OpenGL and DirectX enable creation, representation, manipulation and rendering of 3D scenes.
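As a minimal illustration of how a 2D operation extends to 3D, here is a translation written both ways; the only change is the extra z-coordinate:

```python
# A 2D translation and its 3D extension: the algorithm is unchanged,
# only the third (z) component is added alongside x and y.
def translate2d(p, dx, dy):
    x, y = p
    return (x + dx, y + dy)

def translate3d(p, dx, dy, dz):
    x, y, z = p
    return (x + dx, y + dy, z + dz)

print(translate2d((1, 2), 5, 5))        # (6, 7)
print(translate3d((1, 2, 3), 5, 5, 5))  # (6, 7, 8)
```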

Now the fundamental question is: how do we create / model / represent 3D objects? There are various techniques and tools used to create them, including meshes, splines, fractals, NURBS, BSP trees, octrees, particle systems, physically based modeling, kinematics, extrusion, sweep representations, Constructive Solid Geometry and so on. Each of these, or a combination of them, can be used to model and render 3D objects. We need such a vast collection of modeling tools because of the highly complex nature of the wide variety of real-world objects that we wish to model.

The first step in the process of modeling is to create a skeletal representation of the object, and then apply various shading and rendering procedures to make it look like a real-world object. Towards achieving this visual realism, the steps are:

  • Mesh / Spline model
  • Shade model
  • Texture model
  • Light model
  • Environment map

Fundamentally, an object is modelled in one of two ways:

Surface modeling (boundary representation)

Solid Modeling (Space-partitioning representation)

Surface Modeling: Here we model only the outer surface / skin that encloses the object, i.e., we ignore the inside of it, assuming that the inside is empty space. Example techniques: meshes, splines, NURBS, sweep representations.

Solid Modeling: Here we try to model the outside as well as inside of the object.

Example techniques: Constructive Solid Geometry (CSG), Octrees, BSP Trees

Because the inside of an object is generally of less interest, surface modeling is what we will use most often; only when it is really necessary to model the inside will we go for solid modeling. It is also unnecessary, and a waste of resources, to model the inside of an object when we never need to do anything with it.

Polygonal / Mesh Modeling:

Let’s look at the figures above: each of them is modelled using a mesh. A mesh is a collection of connected polygonal faces, i.e., we approximate the shape of an object by a set of connected polygons. The process of modeling a surface using a collection of triangles is called triangulation or tessellation. If you observe the fish shape above, it is all triangles, but the sphere is made up of quadrilaterals over its entire shape, except for the top and bottom parts, which use triangles.

Consider the standard shapes shown here: a cube and a cylinder. A cube has 6 flat faces and can be modelled exactly using 6 connected polygons, while a cylinder has a single curved surface which can only be modelled using many thin connected polygonal strips, as shown. It can therefore be inferred that polygonal mesh modeling represents flat surfaces exactly, while spherical, curved, or otherwise irregular surfaces can only be approximated. So the cube is modelled exactly, while the cylindrical surface is approximated. The word ‘approximation’ is used for non-flat surfaces because we never recover the true underlying surface with a mesh; look at the sphere model above, whose shape is only approximated by many connected polygons. The more polygons used, the better the approximation to the underlying smooth surface. In essence, we are using flat planar polygons to model smooth / curved surfaces, and the net effect is approximation.
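This effect is easy to quantify. The sketch below (plain Python, no graphics) approximates a unit circle, the cross-section of a cylinder, by an inscribed regular polygon; as the number of sides grows, the polygon's perimeter converges to the true circumference, which is exactly why more polygons give a better approximation.

```python
import math

def ngon_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r.

    Each side is a chord subtending an angle of 2*pi/n, so its length
    is 2*r*sin(pi/n); the polygon's perimeter approximates the
    circle's circumference 2*pi*r from below.
    """
    return n * 2 * r * math.sin(math.pi / n)

true_circumference = 2 * math.pi  # unit radius
for n in (6, 24, 96):
    approx = ngon_perimeter(n)
    print(f"{n:3d} sides: perimeter = {approx:.4f}, "
          f"error = {true_circumference - approx:.4f}")
```

With 6 sides the error is visible to the eye; with 96 sides it is already below a hundredth, which is why dense meshes look smooth.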

To construct or represent a shape using a polygonal mesh, we start with vertices, connect them with edges, join edges to form polygons, and connect many polygons to form a face / surface of the object. Observe the figure below.

Polygonal mesh data is represented by a data structure comprising vertices, edges, faces, surfaces, and surface normals. As shown in the figure below, mesh data is entered into various lists: a vertex list, an edge list, a surface / face list, and a normal list.

Vertex List – {V0(x0, y0, z0), V1(x1, y1, z1), V2, …, V7(x7, y7, z7)}

Edge List – {E0 – (V0, V1), E1 – (V1, V2), E2 – (V2, V3), …, E11 – (V0, V7)}

Face List – {S0(E2, E9, E6, E10), S1(E0, E1, E2, E3), S2(E5, E9, E1, E8), S3(E4, E11, E0, E8), S4(E4, E5, E6, E7), S5(E7, E10, E3, E11)}

Normal List – {n0, n1, n2, n3, n4, n5}

Consider a standard unit cube as shown in the diagram above. Vertices are points where two or more edges meet, edges connect vertices, faces are formed by the planes containing the edges, and each face has a normal. The mesh model is represented through the lists given above: a vertex list containing vertex coordinates; an edge list (12 edges, E0 to E11), where each edge connects two vertices; a face list (6 faces, S0 to S5); and a normal list (6 normals, n0 to n5, one per face) containing the normal vector of each face.

It is important to store the normal vector of each face in the normal list. A normal is the direction perpendicular to the plane of the polygon, pointing outward. This data is important because the orientation of a face towards the light source is determined from its normal vector; the normal is also used to compute the angle between the viewing direction and the direction of the light source, which in turn determines the intensity of the pixel at the point of incidence. The lists mentioned above comprise the geometric information of the mesh model. Along with this, we can also store attribute information (color, texture, etc.) in separate lists.
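A minimal sketch of these lists in Python, assuming an axis-aligned unit cube (the vertex and face numbering below is illustrative and does not match the figure exactly): each face stores its vertex indices counter-clockwise as seen from outside, so the cross product of two of its edge vectors gives the outward normal.

```python
# Vertex list: the 8 corners of a unit cube.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # V0..V3 (bottom)
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # V4..V7 (top)
]

# Face list: vertex indices, counter-clockwise when seen from outside,
# so the computed normal points outward.
faces = [
    (3, 2, 1, 0),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (3, 0, 4, 7),  # left
]

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def face_normal(face):
    """Unit normal of a planar face via the cross product of two edges."""
    p0, p1, p2 = (vertices[i] for i in face[:3])
    n = cross(sub(p1, p0), sub(p2, p0))
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

# Edge list derived from the faces; each undirected edge is stored once.
edges = sorted({tuple(sorted((f[i], f[(i + 1) % len(f)])))
                for f in faces for i in range(len(f))})

normals = [face_normal(f) for f in faces]
print(len(vertices), len(edges), len(faces))  # 8 12 6
```

Note that the edge list need not be typed in by hand: it follows from the face list, since each of the cube's 24 face-edge incidences names an edge shared by exactly two faces, leaving 12 unique edges.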

Spline Modeling:

Splines are purely mathematical modeling tools that can be used to model planar, non-planar, curved, or irregular surfaces. Splines are used in automotive design, CAD, etc. Fundamentally, splines are space curves; when they are extended in two different directions, they form spline surfaces. With spline surfaces we can model flowing cloth and water bodies, wrap a surface around another object, and much more. Spline curves give designers the utmost flexibility to design / model shapes of their choice. A variety of splines are discussed in the literature, viz:

  • Hermite Splines
  • Cardinal Splines
  • Bezier Splines
  • Basis Splines
  • NURBS (Non-Uniform Rational B-Splines)

Spline definition:

Any composite curve formed with polynomial sections satisfying specified continuity conditions at the boundary of the pieces.

If the above definition is closely observed, it is evident that a spline curve is formed by joining many smaller curve sections end to end into a single, continuously and smoothly flowing curve, and that at each joint the curve satisfies specified continuity conditions. Each of the smaller sections is a polynomial function, so the spline curve as a whole is a piecewise polynomial curve.

Spline Terminology:

  • Parametric forms
  • Control Points
  • Convex Hull
  • Boundary conditions
  • Order and degree
  • Approximating
  • Interpolating
  • Blending functions
  • Local Control
  • Cox-De Boor recurrence relations

Spline curves are represented in parametric form, using a parameter u that varies in the range 0 to 1.

P(u) = (x(u), y(u), z(u)), where the parameter u varies from 0 to 1.

For a cubic spline:

x(u) = a_x u³ + b_x u² + c_x u + d_x

y(u) = a_y u³ + b_y u² + c_y u + d_y

z(u) = a_z u³ + b_z u² + c_z u + d_z

or, compactly, P(u) = a u³ + b u² + c u + d

The curve P(u) is a cubic parametric curve composed of x(u), y(u), z(u), each of which is itself a cubic polynomial in u. The coefficients a = (a_x, a_y, a_z), b = (b_x, b_y, b_z), c = (c_x, c_y, c_z), and d = (d_x, d_y, d_z) are vectors. The highest power to which the parameter u is raised is 3, called the degree of the curve, so we call it a cubic curve.
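Evaluating such a curve is straightforward: each coordinate is a cubic polynomial in u. A small sketch, with coefficient vectors invented purely for illustration:

```python
def cubic_point(a, b, c, d, u):
    """Evaluate P(u) = a*u^3 + b*u^2 + c*u + d componentwise.

    a, b, c, d are 3-component coefficient vectors; u lies in [0, 1].
    Note P(0) = d and P(1) = a + b + c + d.
    """
    return tuple(ai * u**3 + bi * u**2 + ci * u + di
                 for ai, bi, ci, di in zip(a, b, c, d))

# Coefficient vectors chosen for illustration only.
a, b, c, d = (0, 0, 0), (-1, 1, 0), (1, 0, 0), (0, 0, 0)

# Sample the curve at five parameter values from u = 0 to u = 1.
points = [cubic_point(a, b, c, d, i / 4) for i in range(5)]
print(points)
```

In practice the coefficients a, b, c, d are not set by hand; they are derived from control points and boundary conditions, as discussed next.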

Due to the flexibility that the spline curve offers, the designer has the freedom to decide on the shape of the curve, i.e., the shape of the curve is fully controlled by the designer. The designer's task is to choose and place points in some sequence in 3D space such that, when the points are connected or approximated by curves, the desired shape results. This is similar to fitting a polynomial curve section to a given set of strategically placed points. These strategically positioned sequences of points, meant to control the shape of the curve, are called control points.

There are many real-world analogies to spline curves. For example, while drawing kolams / rangolis during festive occasions, a set of dots is placed first and then approximated by curves of choice. Another example is a roadside welding shop where iron doors, windows, gates, and grills are made: work starts by placing long strips of iron on a table that has a sequence of strategically arranged firm protrusions or posts on it, and a hand bender is used to bend the iron strip along the posts, giving it the desired curved shape.

Curves that exactly touch these control points are called interpolating curves and curves that approximate the control points are called approximating curves.

As a spline is a piecewise polynomial curve, the many smaller curve sections that make it up must join smoothly at their boundaries. To determine how smoothly curve sections join at a boundary, we check boundary conditions / continuity conditions. Two types of continuity are checked: parametric continuity and geometric continuity.

Parametric continuity:

Two curves, joining at a boundary are said to exhibit levels of continuity as below.

Two curves satisfying C0, or zeroth-order parametric continuity, simply meet at the boundary. They share one common boundary position but do not appear to join smoothly.

Two curves satisfying C1, or first-order parametric continuity, not only meet at the boundary, but their tangents there are also equal. One curve appears to join smoothly with the other.

Two curves satisfying C2, or second-order parametric continuity, meet at the boundary with equal tangents and equal curvatures. The two sections appear to be pieces of a single smooth curve.
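These conditions can be checked numerically. The sketch below, with two cubic sections invented for illustration, tests C0 (equal positions) and C1 (equal first derivatives) at the joint, where section 1 ends at u = 1 and section 2 begins at u = 0:

```python
def poly_eval(coeffs, u):
    """Evaluate the cubic a*u^3 + b*u^2 + c*u + d given coeffs (a, b, c, d)."""
    a, b, c, d = coeffs
    return a * u**3 + b * u**2 + c * u + d

def poly_deriv(coeffs, u):
    """First derivative 3a*u^2 + 2b*u + c."""
    a, b, c, _ = coeffs
    return 3 * a * u**2 + 2 * b * u + c

# Two hypothetical cubic sections (one coordinate only, for brevity).
s1 = (1, 0, 1, 0)    # x(u) = u^3 + u        -> ends at x = 2, slope 4
s2 = (0, 3, 4, 2)    # x(u) = 3u^2 + 4u + 2  -> starts at x = 2, slope 4

c0 = poly_eval(s1, 1) == poly_eval(s2, 0)     # positions match at the joint
c1 = poly_deriv(s1, 1) == poly_deriv(s2, 0)   # tangents match at the joint
print(c0, c1)
```

C2 would additionally require equal second derivatives (6a·u + 2b) at the joint; the same pattern extends directly.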

The higher the degree of the curve sections, the more smoothly they can be made to join at the boundary. Some more examples of splines are shown below.

In the above figures, the spline curve is formed by 4 control points, P0, P1, P2, and P3. The curve touches P0 and P3 and approximates P1 and P2.
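The classic example of such a curve is the cubic Bézier spline, built from exactly four control points. In the sketch below (control point coordinates invented for illustration), the curve interpolates P0 and P3 while only approximating P1 and P2:

```python
def bezier_point(p0, p1, p2, p3, u):
    """Cubic Bezier curve in Bernstein form:

    P(u) = (1-u)^3 p0 + 3(1-u)^2 u p1 + 3(1-u) u^2 p2 + u^3 p3

    The curve passes through p0 (u = 0) and p3 (u = 1) and is merely
    pulled towards p1 and p2.
    """
    w = 1 - u
    return tuple(w**3 * a + 3 * w**2 * u * b + 3 * w * u**2 * c + u**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Hypothetical control points, placed to shape a low arch.
P0, P1, P2, P3 = (0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)

print(bezier_point(P0, P1, P2, P3, 0.0))   # starts exactly at P0
print(bezier_point(P0, P1, P2, P3, 1.0))   # ends exactly at P3
print(bezier_point(P0, P1, P2, P3, 0.5))   # mid-curve point, below P1 and P2
```

Sampling u over many values and joining the samples with line segments is, in fact, how spline curves are typically rasterized, tying this back to the mesh approximation idea above.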

  • The two fundamental ways of modeling are surface modeling and solid modeling
  • Mesh modeling is simpler, and superior for modeling flat surfaces, but can only provide approximation for smoother surfaces.
  • Splines are space curves, which can be used to model any variety of shapes and surfaces
  • Splines are either interpolating type or approximating type.
  • Splines should satisfy continuity conditions at the boundary.

************************************************************************************************

  • Open access
  • Published: 16 June 2022

2D and 3D representation of objects in architectural and heritage studies: in search of gaze pattern similarities

  • Marta Rusnak (ORCID: orcid.org/0000-0002-5639-5326)

Heritage Science, volume 10, article number 86 (2022)


The idea of combining an eye tracker and VR goggles has opened up new research perspectives as far as studying cultural heritage is concerned, but has also made it necessary to reinvestigate the validity of more basic eye-tracking research done using flat stimuli. Our intention was to investigate the extent to which the flattening of stimuli in the 2D experiment affects the obtained results. Therefore, an experiment was conducted using an eye tracker connected to virtual reality glasses and 3D stimuli, which were a spherical extension of the 2D stimuli used in the 2018 research done using a stationary eye tracker accompanied by a computer screen. The subject of the research was the so-called tunnel church effect, which stems from the belief that medieval builders deliberately lengthened the naves of their cathedrals to enhance the role of the altar. The study compares eye-tracking data obtained from viewing three 3D and three 2D models of the same interior with changed proportions: the number of observers, the number of fixations and their average duration, and the time spent looking at individual zones. Although the participants were allowed to look around freely in VR, most of them still made about 70–75% of their fixations in the area that had been presented in the flat stimuli in the previous study. We deemed it particularly important to compare the perception of the areas that had been presented in 2D and that had evoked very much or very little interest: the presbytery, vaults, and floors. The results indicate that, although using VR allows for a more realistic and credible research situation, architects, art historians, archaeologists and conservators can, under certain conditions, continue to apply stationary eye trackers in their research. The paper points out the consequences of simplifying the research scenario, e.g. a significant change in fixation duration. The analysis of the results shows that the data obtained by means of VR are more regular and homogeneous.


Introduction

Scientific fascination with art and the desire to understand an aesthetic experience provided an opportunity to discover many facts about visual processing [ 1 , 2 ]. The use of biometric tools to diagnose how people perceive art has led to the creation of neuroaesthetics [ 3 , 4 ]. One such tool is the eye tracker (ET), which allows one to register an individual gaze path by use of fixations (pauses lasting from 66 to 416 ms [ 5 ]) and saccades (attention shifts between one point of regard and another [ 6 , 7 ]). Assigning such visual behaviors to predefined Areas of Interest—AOIs—makes it possible to analyze numerous parameters. For example, for each such area one can calculate the number of people who looked at it (visitors), the time they spent doing so (total visit duration) or the number of fixations recorded within it. It is also possible to determine precisely when a given area was initially examined (time to first fixation) and how long a single fixation lasted on average (average fixation duration).

Application of eye trackers has allowed researchers to get a closer look at the issue described by numerous sociologists, urban planners, historians of art, conservators, and architects of the past. We distinguish four environments for conducting ET research. Stationary devices enable the study of images displayed on the screen, e.g. photos presented to tourists [ 8 ]. Mobile ET may assist in conducting research in the natural environment [ 9 , 10 ]. Through the combination of mobile ET and other methods, advanced analyses of architectural and urban space can be conducted [ 11 ]. Recent technological advances have allowed ET to be combined with augmented reality (AR) [ 12 ]. The fourth type, the combination of an eye tracker and VR goggles, has already opened up several new research perspectives. It is possible, for example, to analyze the perception of reconstructed historical spaces [ 13 , 14 ]. This new research environment has made it necessary to clarify any unclear points, especially those concerning the validity of prior research done on three-dimensional objects presented as flat images. A visualization displayed on a screen as a flat image is a considerable simplification of the same object seen in reality or presented in the form of omnidirectional content [ 15 ]. While eye-tracking research on paintings [ 16 ] or pictures displayed on a screen [ 17 , 18 ] provokes numerous controversies [ 19 ], similar studies may be even more questionable if 3D objects—such as sculptures, landscapes or buildings—are presented as 2D stimuli [ 20 ]. When scientists choose how to present the given object (e.g. the size of the screen [ 21 ], the perspective from which it is seen [ 22 ], its luminance contrast levels [ 23 ], image quality [ 24 ]), they cannot be sure how such decisions affect the results of the conducted research. Similarly, research dealing with hypothetical issues—e.g. 
related to nonexistent interiors or alternative versions of an existing structure [ 25 ]—simply cannot be done in real-life surroundings. Due to technological limitations, scientists had to use visualizations that would then be displayed on a screen. Nowadays, when we have portable eye-trackers making it possible to display augmented reality stimuli or eye-trackers applicable in VR HMD (head mounted display) [ 26 ], should scientists dealing with the perception of material heritage give up on simplifying the cognitive scenario [ 27 ]? Should they stop using stationary eye trackers? To what extent do photomontages and photorealistic photos capture the specific atmosphere of a given monument [ 28 ]?

Gothic cathedral hierarchy

Research relating to Gothic architecture makes it possible to state that constructors of cathedrals aimed at a harmonious combination of form, function and message [ 29 , 30 , 31 ]. It is believed that alterations in the proportions of these buildings and the appearance of the details in them were directly related to the mystical significance of such spaces [ 32 ]. The same can be said about the altar. The presbytery, closing the axis of a church, is the most important part of such a structure [ 33 ]. The shape of the cathedral’s interior should facilitate focusing one’s eyesight on that great closure, making it easier for the congregation to concentrate on the religious ritual taking place there [ 29 , 34 ]. It is the belief of various scholars that the evolution in thinking about the proportions of sacral buildings led medieval builders to deliberately lengthen the naves of their cathedrals to enhance the role of the altar. This theoretical assertion was confirmed by the author in her 2018 research dealing with the so-called longitudinal church effect [ 35 ]. This experiment, employing eye trackers and flat stimuli, proved that longer naves facilitated the participants’ concentration on the altar and its immediate surroundings; the viewers also spent less time looking to the sides and the way in which they moved their eyes was less dynamic. However, the conclusions of that research indicated a need for further methodological experiments [ 35 ] in the field of eye tracking related to the perception of architecture.

Eye tracking and heritage

Research centered on stimuli displayed in VR makes use of the same mechanisms as portable eye trackers. The difference is that fixations are not marked on a film recorded by a camera aimed at whatever the experiment’s participant is looking at, but registered in relation to objects seen by such a person in their VR goggles ( https://vr.tobii.com/ ). Eye trackers combined with VR HMD remain a novelty, but such a combination has already been applied in research into historical areas [ 36 ], landscape and cityscape protection [ 37 ]. One should not neglect the variety of ways in which other types of eye trackers have been used in studies over cultural heritage. In the study of the perception of art, stationary eye-trackers [ 38 ], mobile eye-trackers [ 39 ] and those connected with VR [ 13 ] have already been used. This technology has been used to analyze perception of paintings [ 40 ], behaviors of museum visitors [ 41 , 42 ], perception of historical monuments [ 43 , 44 ], interiors [ 45 ], natural [ 46 ] and urbanized environments [ 47 ] perceived by different users [ 48 ]. It is not difficult to notice the growing interest in the so-called neuro–architecture [ 49 ].

Eye-tracking heritage belongs to this dynamically developing field [ 50 ]. There is a strong scientific motivation to have a better look at the pros and cons of this kind of research. The implementation of new technologies is often characterized by a lack of a critical methodological approach [ 51 ]. The quality of data depends on multiple factors, one of which is the system in which the research is conducted [ 52 ]. Researchers who compare the characteristics of the data obtained with different types of ET have indicated the pros and cons of various environments used for gathering behavioral data [ 53 , 54 , 55 , 56 ]. For instance, what has been studied is the difference in how people perceive their surroundings when they are having a stroll through the city and just watching a recording from such a walk [ 53 ], and several fascinating differences and similarities were observed as far as the speed of visual reactions is concerned. Prof. Hayhoe’s team noticed a similar change when the act of looking at objects was changed from passive to active, which resulted in significantly shorter fixation times [ 57 ]. One might also wonder whether the results of comparison of passive and active perception will match the tendencies described by Haskins, Mentch, Botch et al., according to which observations that allow head movement “increased attention to semantically meaningful scene regions, suggesting more exploratory, information-seeking gaze behavior” [ 56 ]. If it works that way, those viewers who are allowed to move their heads should spend most time looking at the presbytery and the vaults [ 35 ]. On this level, it is also possible to raise the question whether visual behavior will change in the same way as has been observed in the museum when comparing scan paths in VR and reality. Would there be a tendency to switch focus rapidly if the average fixation duration decreased in VR [ 54 ]?

In 2018, what was available was research using flat visualizations of cathedral interiors examined with a stationary eye tracker or experiments done in a real-life situation with a portable eye tracker. As of today, eye tracking devices and the software that accompanies them permit experiments in virtual reality environments. Undertaking the same topic using a more advanced tool might be not only interesting but scientifically necessary.

Materials and methods

Research aim.

The most basic aim of the research was to compare the data presented in the aforementioned paper [ 35 ], where the experiment’s participants looked passively at flat images, with the results obtained in the new experiment employing VR technology, which allowed movement. The author intended to see whether her previous decision to use a stationary eye tracker and simplify the cognitive process was a valid scientific move. Did the research involving even photorealistic flat displays—photographs, photomontages, and visualizations of architecture—yield false results? This comparison is meant to bring us nearer to an understanding of the future of the use of stationary eye trackers in the care of heritage.

Basic assumption

The methodology of the VR experiment was as similar as possible to the one applied to the previous experiment involving a stationary eye tracker. The most important aspects were connected with selection of participants, presentation of stimuli, the number of additional displayed images and duration of analyzed visual reactions.

Research tools and software

The set used in the experiment consisted of HTC Vive goggles and Tobii Pro Integration eye tracker [ 58 ]. The virtual research space had been orientated using SteamVR Base Station 2.0. The eye-tracking registration was done on both eyes with a frequency of 120 Hz, accuracy of 0.5°, and the trackable field of view was 110°. Additional spherical images were produced using Samsung Gear 360. Tobii Pro Lab 360VR software was used in the experiment. The model of the cathedral and its versions, as well as the final stereoscopic panoramas were developed in Blender, version 2.68. Most of the materials come from free online sources. The remaining textures were created in Adobe Substance Painter.

Participants

In accordance with the conclusions drawn from previous studies, the number of participants was increased. 150 people were invited to take part; nevertheless, after careful data verification, only 117 recordings were considered useful. All volunteers were adults under 65 (Fig.  1 ), with at least a post-primary education, born and raised in Poland, and living within the Wrocław agglomeration. Being European, living in a large city, and having at least a basic education make the observers more than likely to be familiar with the typical appearance of a Gothic interior.

figure 1

Age and sex distribution. 150 volunteers, 117 valid recordings: male = 43, female = 74, age median = 32

Analogically to the previous research, the participants were only accepted if they had no education related to the field of research (i.e. historians of art, architects, city planners, conservators, and custodians were all excluded). The experience of professionals influences their perception of stimuli relevant to their field of practice [ 59 , 60 ]. Another excluded group—on the basis of the information provided in their application form—were those who had already taken part in the previous research since they might remember the displayed stimuli and there would be no way of verifying to what extent that affected the results. The preliminary survey was also intended to exclude people with major diagnosed vision impairments—those with problems with color vision, strabismus, astigmatism, cataract, impaired eye mobility. Participants could use their contact lenses. However, no optometric tests were conducted to verify the participants' declarations. From the scientific perspective, it also seemed crucial that the participants have a similar approach to the employed VR environment. None of the participants possessed such a device. 102 volunteers had had no experience with VR goggles. The other 15 people had had some experience with VR, but of a very limited sort—only for 5–10 min and not more than twice in their lifetime. Therefore it is reasonable to claim that the participants constituted a fairly homogeneous group of people unfamiliar with the applied scientific equipment.

Personalization of settings and making the participants familiar with a new situation

One significant difference in relation to the original research was the fact that the device itself is placed on the head of a participant and that such a person is in turn allowed to move not only their eyes but also their neck or torso. A lot of time was spent on disinfecting the goggles, making sure the device is worn properly and taking care of the cables linking the goggles with a computer so that they obstructed the participants’ head movements as little as possible. (Some participants, especially petite women, found the device slightly uncomfortable due to its weight.) Three auxiliary spherical images, unrelated to the topic of the research, were used to adjust the spacing of the eye tracker lenses as well as to demonstrate the capabilities of the headset. This presentation was also intended to lessen the potential stress or awkwardness that might stem from being put in an unfamiliar situation; it also gave the researchers a chance to make sure the participants see everything clearly and feel comfortable. When both the participants and the researchers agreed that everything had been appropriately configured, the former were asked not to make any alterations or adjustments to either the headset or the chair once the experiment commenced. The methodological assumption was to stop the registration if the process of preparing participants after they had put on the goggles took longer than 10 min so that the effects of fatigue on the registration were insignificant. After the preparation was finished, one researcher observed the behavior of the participant and another researcher supervised the process of calibration and displaying of the presentation including the stimuli. Notes were made, both during and after the experiment, on the participants’ reactions or encountered difficulties. Participants touching the goggles and any perceived problem with stimulus display also led to the data being considered invalid (Fig.  1 ).

Used spherical stimuli

The inability to move decreases the level of immersion in the presented environment [ 61 ]. However, in order to make the comparison with the previously performed studies possible, the participants were not allowed to walk. For this reason, only stationary spherical images were used in the research (Fig.  2 ). The ratios, the location and the size of used details, materials, colors, contrasts, or intensity and angle at which the light is cast inside the interior were all identical to the ones used in the previous research. Just like then, three stimuli of varied nave length were generated. The ratios of the prepared visualizations were based on real buildings of this kind. The images were named as follows: A3D—cathedral with a short nave, B3D—cathedral with a medium-length nave, and C3D —cathedral with a long nave (Fig.  2 ).

figure 2

Spherical visualizations and division into AOI. A3D—flattened spherical image prepared for the cathedral with a short nave, A3D spherical image for the cathedral with a short nave, which presents a part of the nave and the presbytery, B3D spherical image for the cathedral with a medium-length nave, C3D spherical image for the cathedral with a long nave, AOI INDICATORS—the manner of allocating and naming Areas of Interest. AOI names including the phrase 3D describe all elements not seen in the previous research. AOI names ending with 2D describe the elements shown in the previous research (Marta Rusnak on the basis of a visualization done by Wojciech Fikus).

Spherical images and flat images

The most striking difference between the two experiments is the scope of the image that the participants of the VR test were exposed to. The use of a spherical image made it possible to turn one’s head and look in all directions. Those who took part in the original test saw only a predetermined fragment of the interior. For the purpose of this study, this AOI was named “Research 2D”. The range of the architectural detail visible in the first test was marked with a red frame (Fig.  2 ). It might seem that the participants exposed to a flat image saw less than one-tenth of what the participants of the VR test were shown. This illusion stems from the nature of a flattened spherical image, which additionally enlarges the areas located close to the vantage point by strongly bending all nearby horizontal lines and surfaces. The same distortion is responsible for the fact that the originally square image here has a rounded top and bottom. An aspect where achieving homogeneous conditions proved difficult was the attempt to achieve the same balance of color, contrast and brightness on both the computer screen and the projector in the VR goggles. The settings—once deemed a satisfactory approximation of those applied in the original test – were not altered during the experiment.

The experiment, like the original research, included nine auxiliary images (Fig.  3 ). The presented material consisted of spherical photos taken inside churches in Wrocław, Poland. Each image, including the photos and the visualizations, was prepared so that the presbytery in each case would be located in the same spot, on the axis, in front of the viewer (Fig.  4 ).

figure 3

Auxiliary images. a Church of the Virgin Mary on Piasek Island in Wrocław. b Church of Corpus Christi in Wrocław. c Church of St. Dorothy in Wrocław. d Dominican Church in Wrocław. e Czesław's Chapel in the Dominican Church in Wrocław. f Church of St. Maurice in Wrocław. g Church of St. Ignatius of Loyola in Wrocław h. Church of St. Michael the Archangel in Wrocław I Cathedral in Wrocław. (fot. MR)

figure 4

Location of equipment in the lab. The idea was to maintain consistency between the lab space and the way spherical stimuli were presented.

Additional illustrations play an important role. Firstly, they made it possible to ensure that the studied interiors were not displayed when the participants behaved in an uncertain manner, unaware of how to execute the task at hand. Additionally, all photos showed the interior of churches. So, just like believers or tourists entering the temple, the participants were not surprised by the interior's function.

Just like in the previous research it was important to make it impossible for the participants to use their short-term memory while comparing churches with different lengths of nave, so each participant’s set of images included only one of the images under discussion (A3D—short or B3D—medium-length or C3D—long). Therefore three sets of stimuli had been prepared. The auxiliary images ensured randomness since the visualization A3D, B3D or C3D appeared as one of the last three stimuli. Each included instructional boards informing the participants about the rules of the experiment and allowing an individual calibration of the eye tracker, one of the three analyzed images and nine auxiliary images in the form of the aforementioned spherical photos. Analogically to the original experiment, the participants were given the same false task—they were supposed to identify those displayed buildings which, in their opinion, were located in Wrocław. This was meant to incite a homogeneous cognitive intention among the participants [ 62 ] when they looked at a new, uniquely unfamiliar interior. Since it was expected that the VR technology would be a fairly new experience for the majority of the participants, it was decided that the prepared visualizations would not be shown as one of the first three stimuli in a set. Other images were displayed in random order.

During that previous test, a trigger point function of the BeGaze (SMI) software was used, which allowed automatic display of an image once the participant looked at a specific predefined zone on the additional board. In Tobii ProLab [ 58 ] such a feature was not available. All visual stimuli were separated from one another by means of an additional spherical image. When participants looked at a red dot, the decision to display the next image had to be made by the person supervising the entire process.

The time span of registration that had been calculated for the original study was kept unchanged and amounted to 8 s. This unusual study time stems from an experiment conducted in 2016 [ 14 ], which makes comparisons possible. However, this proved insufficient for the participants: 5 out of 7 volunteers invited for preliminary tests and not included in the group undergoing analysis expressed the need to spend more time looking at the images. It was suggested that the brevity of the display made the experience uncomfortable. Therefore the duration of a single display was increased to 16 s, but only the first eight seconds were taken into consideration and used in the comparison.

Preparation of the room

Viewing a spherical image in VR did not allow movement within the displayed stimuli; the participants were therefore asked to sit, just as in the study involving a stationary eye tracker. A change in the position of the participant's body would have required a change in the height of the camera and, as a result, a change in perspective, which might have adversely affected the experiment. The room was quiet and its light could be dimmed. The positions of the participant's seat and the VR base unit were marked on the floor so that accidental shifts of the equipment could be detected and, if necessary, corrected (Fig.  4 ). This was important because any shift in the position of the equipment would affect the orientation of the stimuli in space, making the direction in which the participant initially looked inconsistent with the axes of the displayed interiors.

This section of the article describes the new results and then compares them to the previous observations of flat images.

Numerical data were generated in Tobii Pro Lab, and collective numerical reports were prepared from the registration data. Five main types of eye-tracking variables were analyzed: fixation count, visitor count, average fixation duration, total fixation duration, and time to first fixation. Reports were generated automatically in Tobii Pro Lab; the exported XLS files were processed in Microsoft Excel and Statistica 13.3, and graphs and diagrams were prepared in Photoshop CC 2015.
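Schematically, the five variables can be derived from a per-fixation export along the following lines. This is only an illustrative sketch: Tobii Pro Lab generated the actual reports automatically, and the record structure used here (participant, AOI, start time, duration) is a hypothetical stand-in for the exported data.

```python
# Illustrative sketch only; field names are hypothetical, not Tobii Pro Lab's.
def aoi_metrics(fixations, aoi):
    """Compute the five analyzed variables for one Area of Interest."""
    hits = [f for f in fixations if f["aoi"] == aoi]
    if not hits:
        return None  # no participant fixated this AOI
    total = sum(f["duration_ms"] for f in hits)
    return {
        "fixation_count": len(hits),
        "visitor_count": len({f["participant"] for f in hits}),
        "average_fixation_duration_ms": total / len(hits),
        "total_fixation_duration_ms": total,
        "time_to_first_fixation_ms": min(f["start_ms"] for f in hits),
    }
```

In such a scheme, each of Tables 1 to 3 corresponds to one or two of the returned fields, computed per AOI and averaged over the participant group.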

In the end, biometric data was gathered correctly from 40 people who had been shown stimulus A3D, 40 people who had been shown stimulus B3D, and 37 people who had been shown stimulus C3D. That means that for various reasons 22% of registrations were deemed unusable for the purposes of the analysis (Fig.  1 ).

Due to the change of equipment and experimental environment, more Areas of Interest were analyzed than during the previous study [ 35 ]. The way people looked at the spherical images was analyzed by dividing each of them into ten AOIs; their locations and names can be seen in Fig.  2 . The AOIs in the central frame carry captions ending in “2D” because they correspond to the AOIs used for the flat images in the first study; together they can be summed into one area named Old Research. Five more AOIs, placed over the parts of the image that had not been visible to those exposed to the flat stimuli, carry names ending in “3D”.

A fixation report was prepared for the entire spherical image as well as for the five old and five new AOIs (Table 1 ). The numbers of fixations performed on the entire stimuli A3D, B3D and C3D do not differ significantly. Far more fixations fell within the Old Research AOI. The number of fixations performed within the five new AOIs decreased as the interior lengthened, whereas the values for the fields visible in the previous test increased.

To determine interest in particular parts of a stimulus, we can look at how many times those parts were viewed. Table 2 lists all 10 analyzed AOIs and their values. When viewing the data, it is important to consider the different numbers of participants who were shown the examples. Therefore the numbers are additionally shown as percentages. On inspection of the first part of the table it is apparent that the values presented in the five columns of new AOIs represent a decreasing trend.

As the proportions of the interior are extended, three old AOIs gain more and more attention. The largest and most significant difference can be seen in the number of observers of the Vaults 2D AOI. For the Floor 2D and Left Nave 2D AOIs the number decreases, while the number of people looking at the Presbytery AOI slightly increases.

Another analyzed parameter is average fixation duration. Despite the noticed differences, a one-way ANOVA showed that all observed deviations in fixation duration should be treated as statistically insignificant [ANOVA, p > 0.05; due to the large number of compared groups, the p values are placed in Table 2 (lines 10 and 15)]. Despite this result, it is important that in all three examples the Presbytery 2D AOI has the longest-lasting fixations, and for C3D the average fixation duration exceeded 207 ms. This shows how visually important this AOI becomes in the interior with the longest nave.
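For reference, a one-way ANOVA compares the variance between the participant groups with the variance within them; a minimal pure-Python sketch of the F statistic is given below. The sample durations are invented placeholders, not the study's data.

```python
# Minimal one-way ANOVA sketch (F statistic only); values are placeholders.
def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Placeholder average fixation durations (ms) for three groups:
f_stat = one_way_anova_f([130, 142, 125], [136, 131, 140], [139, 133, 145])
```

The F statistic is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p value; p > 0.05 is the insignificance criterion used in the study.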

The last generally analyzed feature is total visit duration. By far the largest difference in its value was recorded for the Vaults 2D AOI: those exposed to stimulus C3D spent twice as much time looking at the vaults as did those exposed to stimulus A3D. For the longest interior the attention time also increased for the Right Nave 2D AOI, while the Floor 3D AOI shows almost no change in visit duration. Data analysis shows that all observed deviations concerning average fixation duration and total visit duration are statistically insignificant (ANOVA, F(2,117), p > 0.05; exact values are given in Table 2 , lines 10 and 15). This insignificance stems from the fact that the stimuli in question differ only slightly: the only thing that actually changes is the fragment of the main nave most distant from the observer. The data interpretation to follow is based on a general analysis of different aspects of visual behavior and on establishing increasing or decreasing tendencies between the AOIs for cases A3D, B3D and C3D. These observations are then juxtaposed with the relationships between stimuli observed in the 2D experiment. However, before engaging in a comparison, one needs to be certain that the cropped image chosen for the 2D experiment really covers the area that those exposed to the interior looked at the longest.

One of the primary ideas behind repeating research on the same topic was to check the validity of the methodology used previously, including the author's choice of how much of the interior was shown in the original stimuli. Should the participants of the VR experiment look at this area for only a very limited time and instead find other parts of the building more attractive, it might suggest that erroneous assumptions were made in either the previous research or the current one. In that case it would be impossible to make a credible comparison of the results obtained in the two experiments. A general analysis was therefore done for the Research 2D AOI. The value that perhaps best testifies to one's cognitive engagement is the total visit duration calculated for all the AOIs visible in the previous research. The average value of this parameter amounted to 4.70 s (almost 59% of the registration span) for stimulus A3D, 4.14 s (52%) for B3D, and 5.14 s (slightly over 64%) for C3D. Interestingly, an average of nearly three-quarters of the participants' fixations (70.1% to 75.9%) took place within the area presented in the previous research (Table 3 ).
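The quoted percentages follow directly from dividing the total visit duration by the 8 s registration span, as this quick check shows:

```python
# Total visit duration within the Research 2D AOI as a share of the 8 s span,
# using the averages quoted in the text.
SPAN_S = 8.0
durations_s = {"A3D": 4.70, "B3D": 4.14, "C3D": 5.14}
shares = {name: round(100 * t / SPAN_S) for name, t in durations_s.items()}
# A3D -> 59 %, B3D -> 52 %, C3D -> 64 %
```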

That means that fixations within the Research 2D AOI were focused and that the participants made few fixations when looking to the sides or to the back. If those points of focus were more evenly distributed, one would be entitled to assume that no part of the interior drew attention in particular. However, these results suggest that the Research 2D AOI really did include the most important architectural elements in such a religious building as far as the impression made on an observer is concerned. This allows a further, more detailed analysis and comparison of the data obtained using a VR headset and those acquired by means of a stationary eye tracker.

Subsequent comparisons are made for single AOIs. In order to facilitate the comparison of the data gathered in both experiments, the names of the stimuli are expanded with distinctions of length: short cathedral A3D, medium-length cathedral B3D, long cathedral C3D. A graphic representation depicting all the AOIs for the three new spherical stimuli and the three flat stimuli from the old research was prepared to allow a clear comparison of a large portion of the numerical data. The upper part shows the schematics for the interiors with a short nave (A2D and A3D), followed by the interiors with a medium-length nave (B2D and B3D), while the longest (C2D and C3D) are displayed at the bottom. Each such graph was filled in with the values of the analyzed aspect: number of observers (Fig.  6 ), average fixation duration (Fig.  7 ), total visit duration (Fig.  8 ). Values inserted in the middle section were compared with the schemata shown above and below (Fig.  5 ). The range of the difference determined the color coding of the particular AOI, as explained in the attached legend: when the values were equal, the AOI was cream-colored; when the analysis showed a lower value, it was colored pink; and when an increase was noticed, it was colored green. Additional graphs beneath the described schemata support the comparison.

figure 5

The principle of color coding used in Figs.  6 , 7 and 8

figure 6

Visitor count: comparison between the three interiors with short, medium-length and long naves. a 2D flat stimuli, b 3D spherical stimuli. Charts c and d present only the data from the Old Research area.

figure 7

Average fixation duration: comparison between interiors with short, medium-length and long naves. a 2D flat stimuli, b 3D spherical stimuli. c A box plot comparing fixation durations for the three new and the three old stimuli as wholes. Each box is drawn from Q1 (first quartile) to Q3 (third quartile) with a vertical line denoting the median. Whiskers indicate the minimum and maximum; outliers are shown as points beyond ± 1.5 interquartile range.
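The box construction described in panel c can be sketched as follows (illustrative code with placeholder values, not the study's data):

```python
# Sketch of the Fig. 7c box-plot convention: box from Q1 to Q3, median line,
# outliers flagged beyond 1.5 x IQR. Input values are placeholders.
import statistics

def box_summary(durations_ms):
    q1, median, q3 = statistics.quantiles(durations_ms, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [d for d in durations_ms if d < lo or d > hi]
    return {"q1": q1, "median": median, "q3": q3, "outliers": outliers}
```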

figure 8

Total visit duration: comparison between the three interiors with short, medium-length and long naves. a 2D flat stimuli, b 3D spherical stimuli. Charts c and d present only the data from the Research 2D area.

The volunteers were able to make a series of movements, looking up, down, to the back or to the sides. Changes in such activity may be analyzed through the number of people looking at particular AOIs. Figure  5 shows the combination of the three schemata and also includes the number of observers for each AOI. In the upper part, most of the newly added AOIs in stimulus A3D are colored green, which suggests increased movement of the participants. The 3D AOIs in the bottom part of the figure, which shows the data for stimulus C3D, are dominated by red. The gradual decrease in the number of participants looking in directions other than forwards is very easy to notice when comparing those exposed to stimuli A3D, B3D, and C3D. At this level of generalization the new study confirms the previous findings. The vault and the presbytery, for example, were most willingly observed by people looking at interior C2D. One can easily notice that the number of those looking to the sides decreased as the nave lengthened; many more people neglected the Left Nave 3D AOI. It is plain to see that the longest nave discouraged the observers from looking to the sides of the spherical image.

People observing 2D images on a screen also looked sideways more when the church was shorter and less when its nave was lengthened. It has to be pointed out that relatively few participants looked to the back of the spherical image. Here one could observe the same tendency as with looking to the sides: the longer the nave, the smaller the need to divert one's gaze from the central part of the image. That is perhaps why not a single one of the 37 people exposed to C3D looked at the back of the image during the 8 s that were recorded and analyzed. Similarly to the results of the previous research, the longer the nave became, the fewer people chose to look upwards at the Vaults 3D AOI. More people were inclined to look at the Vaults 2D AOI as the nave got shorter, which is visible in the visitor count.

Despite the differences in the results obtained in the 2D and 3D experiments, the main tendencies proved not to be contradictory, as may be seen in the graphs in Fig.  5 . The most prominent deviation occurs in the Floor 3D AOI: it appears that the inability to look at other parts of the interior made the participants of the 2D study pay much greater attention to the Floor AOI. As the length of the nave grew, the attention paid to the Floor AOI decreased in both experiments. The deviations related to the other AOIs turned out to be minor. This seems particularly important in the case of the most important part of the interior, the presbytery. The author was afraid that the different nature of illustration switching (automatic in 2D versus manual in 3D) might adversely affect the number of participants looking at this AOI. However, nothing to that effect has been observed: the results for the Presbytery 2D AOI in both experiments are extremely similar.

Table 2 indicates that the deviations in fixation duration are not statistically significant. Nevertheless, a comparison of all fixations on all stimuli seems important (Fig.  7 ), and it demonstrates many similarities between the 2D and 3D experiments.

In the 3D study the average duration of a single fixation within the Presbytery 2D AOI increases with the length of the nave. The average fixation duration for the Vaults 2D AOI also increases slightly as the nave gets longer. The same tendency was noticed in the research done on flat images, even though the differences between the stimuli noticed in the current study were smaller. In both cases the visual magnetism of the vaults and the presbytery rises as they get further and further away from the observer.

A significant change should be noted in the average duration of a fixation for the entire image (A3D = 130 ms vs A2D = 290 ms, B3D = 136 ms vs B2D = 263 ms, C3D = 139 ms vs C2D = 271 ms). The average duration of fixations recorded for images displayed on the screen was twice as long as for those shown in the VR environment. A similar trend of fixation durations shortening from cognitive fixations (150–900 ms) to express fixations (90–150 ms) [ 63 ], although not as striking as in the research under discussion, was noticed in VR by Gulhan et al. [ 54 ].
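As a small illustration of the fixation classes cited above, durations can be banded as follows; the handling of the exact band boundaries is an assumption, since [ 63 ] only gives the ranges.

```python
# Duration bands cited in the text: express 90-150 ms, cognitive 150-900 ms [63].
# Boundary handling (which band owns exactly 150 ms) is an assumption.
def classify_fixation(duration_ms):
    if 90 <= duration_ms < 150:
        return "express"
    if 150 <= duration_ms <= 900:
        return "cognitive"
    return "other"
```

Under this banding, the mean fixation durations recorded in VR (130 to 139 ms) fall into the express class, while those recorded on the screen (263 to 290 ms) are cognitive.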

The comparison of the time the participants spent looking at individual AOIs in the 2D and 3D studies turned out to be very important. It allows one to clearly assess the impact of the way the interior is presented on how it is perceived (Fig.  8 ).

In the 2018 experiment the time the participants spent looking outside the screen would decrease along with the increase in the interior’s length. It was interpreted as a sign of boredom—the longer interior was slightly more complex and therefore kept the participants engaged for a longer period of time. In the 3D experiment those exposed to the stimulus with the medium-length and the longest nave respectively spent less and less time looking outside the Research 2D AOI. Such results, very similar in both experiments, allow one to state that the lengthening of the cathedral’s interior helps keep the interest of participants in the Presbytery 2D AOI and the areas closest to it.

Most AOIs have maintained a decreasing or increasing direction of change. However, one tendency was altered and seems contradictory to the hypothesis being verified. In the VR tests it turned out that the more distant—and therefore smaller—the presbytery was in the image, the less time on average the participants spent looking at it. This tendency could not be observed in the previous study.

Diagrams and graphs in Fig.  8 facilitate the comparison of the total visit duration results for the 2D and 3D tests. The diagram on the right shows more regularity. The analysis of the graphs shown below makes it possible to see two deviations in the graph on the left, of which the one related to the presbytery causes the most concern. In the case of the data from the VR recording, four out of five dependencies are regular. The scale of the irregularity related to the Left Nave 2D AOI in the VR experiment (bright blue line) is similar to that of the Right Nave 2D AOI (dark blue line) in the 2D experiment. Large differences in the shape of the green and yellow lines in the charts (Fig.  8 c, d), which reflect how the total visit duration changed for the Floor 2D AOI and the Presbytery 2D AOI, indicate that these two areas require analysis. The noticed deviations, in contrast to those of the other AOIs visible in both experiments, proved statistically significant (ANOVA: floor F(5,193) = 38.29, p = 0.034; presbytery F(5,193) = 78.21, p = 0.041). The attention paid to the floor is distinctly lower for all spherical stimuli, but decreases along with the increase in the length of the interior in both the 2D and 3D studies. On the other hand, the tendencies that characterize the visual attention paid to the presbytery in the flat-image research and in the research done on spherical images are visibly different. It is extremely interesting that astonishingly similar results were obtained for the Vaults 2D AOI (red lines).

In comparison to other eye-tracking tests mentioned in the introduction, the fact that useful data was obtained from 117 people is more than satisfactory. From a traditional point of view, however, the sample is small, so the research has potentially low power, which limits the likelihood of reproducibility. This test also has the weakness of evaluating only three variations of one particular case study, which in addition is a computer-generated image and not a real-life interior. A similar experiment carried out on the basis of an actual interior and involving three different recording methods (stationary, mobile and VR eye tracking) would be very interesting and scientifically beneficial. Should one come up with such a three-part experiment, the methodology could be prepared with more precision and deliberation.

A different way of switching between successive stimuli in the 2D and 3D tests also may have affected the results of the comparison as far as the visual attention paid to the presbytery is concerned. Since in the 3D test it was not possible to establish field triggers that would automatically switch the images, one potential result is the lower number of fixations within the Presbytery 2D AOI. However, as evidenced by the comparison of the graphs in Fig.  6 , this difference in the switching method could not have had a major impact.

Despite using the same graphics program for both the 2D and 3D visualizations, the differences in the quality of the images displayed in two different environments in the two parts of the experiment may have influenced the results [ 64 ]. However, it must be considered as an element that cannot be separated from the chosen diagnostic tool. The construction of the VR goggles disturbs the assessment of distance regardless of the graphic quality of the displayed stimuli [ 65 ]. Unfamiliarity with the new technology may also create some minor difficulties as many people seem to find judging distances in VR difficult [ 66 , 67 ]. The fact that most deviations are not accidental and rather logically related to the previous test seems to suggest that the influence of this aspect on the research results was not significant.

Another element that might have influenced the results of the VR research was physical discomfort, mostly neck fatigue, connected with the weight of the device the participants put on their heads and observed in other cases of VR eye-tracking research [ 68 , 69 ]. However, the participants shown all three stimuli were equally exposed to this inconvenience, which makes it unlikely to have affected the results and therefore the validity of the research.

The values discussed in the paper suggest that, even though a flat image is a major simplification of a real-life cognitive situation, the dominating reactions of the participants were sufficiently similar. Even though some differences can be observed, it is possible and justified to draw the same conclusion—the observers’ attention was directed much more effectively at the presbytery when the church’s interior was longer. Only three of the numerous analyzed features did not support the hypothesis of the existence of the longitudinal church effect. Even though the convergence is not perfect, it can be asserted that science confirms the genius of medieval thinkers and architects.

The amount of time the volunteers spent looking at the vaults did not change much between the two experiments, although the comparative analysis showed shorter fixation times in 3D; fixation duration is usually regarded as an important indicator of visual engagement [ 70 ]. Aside from the pair A2D and A3D, there were no significant differences in the number of observers who chose to view the vaults: 82% of participants observed the vaults visible in A2D, while 50% observed them in A3D (Fig.  6 ). This does not support the finding of Amanda Haskins's team, according to which an area attractive in a 2D experiment will prove even more attractive in 3D [ 56 ]. The results obtained in the research under discussion show that AOIs that were visually attractive in the 2D test remained to a large extent equally attractive in the 3D test, whereas the AOIs that were less visually appealing in 2D became even less appealing in 3D, since the attention they would receive was divided between them and AOIs that did not exist in the 2D experiment. The fact that much larger deviations are noticed in 2D than in 3D suggests that it is preferable to study and discuss the perception of architecture on the basis of 3D images or, should 2D need to be employed, using not one single flat image but several of its variations (just as three interiors of different lengths were used in this research). This facilitates a more detailed and precise comparison that in the end results in more credible conclusions.

Conclusions

Despite the previously mentioned drawbacks and numerous doubts about research conducted with VR sets, the results obtained with spherical stimuli showed greater regularity than those collected during the 2D research. With the popularization and development of VR, some disadvantages of this technology are likely to diminish or disappear. Although 3D stimuli seem to be a better choice, the study does not assert that flat stimuli and stationary eye trackers should be abandoned as far as architectural research is concerned. The results of the comparative analysis also indicate that it is better not to derive conclusions from the data obtained from one single image but rather from a set of the given image’s slightly modified versions. If that is for some reason impossible, the absence of other values available for comparison makes evaluating and drawing correct conclusions fairly unreasonable. For this reason, it is advisable to use stationary eye tracking for research on issues related to cultural heritage that are less specific and more general.

The composition presented in the study has a simple, symmetrical layout with rhythmically repeating elements. In this way, a flat representation of the interior resulted in a reasonable interpretation of the results. It can be assumed that the more complex the composition to be analyzed with an eye tracker, the greater the impact of deviations resulting from the flat representation of the image.

Moreover, since the deviations in the gaze paths between a flat and spherical representation of the same faithfully modeled interior are easily noticeable, it seems obvious that they would be much greater should one use sketches, drawings or paintings instead of more real-life, photo-realistic stimuli. It appears only reasonable that implementation of such images would further distort the credibility of the obtained data and therefore they should not be employed in research on the visual perception of architecture or other tangible heritage. Studies that have had this form should be reviewed.

Data availability

The datasets generated from 68 participants and analyzed during the current study are available in the RepOD repository: Marta Rusnak, Problem of 2D representation of 3D objects in architectural and heritage studies. Re-analysis of phenomenon of longitudinal church, RepOD, V1. A lack of consent from all participants means the other eye-tracking data are not publicly available; in line with the applicable legal and ethical principles, such a right belonged to the volunteers. All graphic materials are included in the supplement. Study consent forms are preserved in the WUST archives.

Abbreviations

ET: Eye tracker

VR: Virtual reality

AR: Augmented reality

2D: Two-dimensional

3D: Three-dimensional

AOI: Area of Interest

Dickie G. Is psychology relevant to aesthetics? Philos Rev. 1962;71:285–302. https://doi.org/10.2307/2183429 .


Makin A. The gap between aesthetic science and aesthetic experience. J Conscious Stud. 2017;24(1–2):184–213.

Zeki S, Bao Y, Pöppel E. Neuroaesthetics: the art, science, and brain triptych. Psych J. 2020;9:427–8. https://doi.org/10.1002/pchj.383 .

di Dio C, Vittorio G. Neuroaesthetics: a review. Curr Opin Neurobiol. 2009;19(6):682–7. https://doi.org/10.1016/j.conb.2009.09.001 .


Poole A, Ball L. Eye tracking in human-computer interaction and usability research: current status and future prospects. In: Encyclopedia of human computer interaction. 2006. p. 211–9. https://doi.org/10.1016/B978-044451020-4/50031-1

Duchowski AT. Eye tracking methodology: theory and practice. London: Springer-Verlag; 2007.


Holmqvist K, Nyström M, Andersson R, Dewhurst R, Jarodzka H, van de Weijer J. Eye tracking: a comprehensive guide to methods and measures. Oxford: Oxford University Press; 2011.

Michael I, Ramsoy T, Stephens M, Kotsi F. A study of unconscious emotional and cognitive responses to tourism images using a neuroscience method. J Islam Mark. 2019;10(2):543–64. https://doi.org/10.1108/JIMA-09-2017-0098 .

Dalby Kristiansen E, Rasmussen G. Eye-tracking recordings as data in EMCA studies: exploring possibilities and limitations. Soc Interact Video-Based Stud Hum Soc. 2021. https://doi.org/10.7146/si.v4i4.121776 .

Graham J, North LA, Huijbens EH. Using mobile eye-tracking to inform the development of nature tourism destinations in Iceland. In: Rainoldi M, Jooss M, editors. Eye Tracking in Tourism. Cham: Springer International Publishing; 2020. p. 201–24.


Han E. Integrating mobile eye-tracking and VSLAM for recording spatial gaze in works of art and architecture. Technol Arch Design. 2021;5(2):177–87. https://doi.org/10.1080/24751448.2021.1967058 .

Chadalavada RT, Andreasson H, Schindler M, Palm R, Lilienthal AJ. Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human–robot interaction. Robot Comput Integr Manuf. 2020;61:101830. https://doi.org/10.1016/j.rcim.2019.101830 .

Campanaro DM, Landeschi G. Re-viewing Pompeian domestic space through combined virtual reality-based eye tracking and 3D GIS. Antiquity. 2022;96:479–86.

Rusnak M, Fikus W, Szewczyk J. How do observers perceive the depth of a Gothic cathedral interior along with the change of its proportions? Eye tracking survey. Architectus. 2018;53:77–88.

Francuz P. Imagia. Towards a neurocognitive image theory. Lublin: Katolicki Uniwersytet Lubelski; 2019.

Walker F, Bucker B, Anderson N, Schreij D, Theeuwes J. Looking at paintings in the Vincent Van Gogh Museum: eye movement patterns of children and adults. PLoS ONE. 2017. https://doi.org/10.1371/journal.pone.0178912 .

Mitrovic A, Hegelmaier LM, Leder H, Pelowski M. Does beauty capture the eye, even if it’s not (overtly) adaptive? A comparative eye-tracking study of spontaneous attention and visual preference with VAST abstract art. Acta Psychol. 2020;209:103133. https://doi.org/10.1016/j.actpsy.2020.103133 .

Jankowski T, Francuz P, Oleś P, Chmielnicka-Kuter E, Augustynowicz P. The Effect of painting beauty on eye movements. Adv Cogn Psychol. 2020;16(3):213–27. https://doi.org/10.5709/acp-0298-4 .

Ferretti G, Marchi F. Visual attention in pictorial perception. Synthese. 2021;199(1):2077–101. https://doi.org/10.1007/s11229-020-02873-z .

Coburn A, Vartanian O, Chatterjee A. Buildings, beauty, and the brain: a neuroscience of architectural experience. J Cogn Neurosci. 2017;29(9):1521–31. https://doi.org/10.1162/jocn_a_01146 .

Al-Showarah S, Al-Jawad N, Sellahewa H. Effects of user age on smartphone and tablet use, measured with an eye-tracker via fixation duration, scan-path duration, and saccades proportion. In: Al-Showarah S, Al-Jawad N, Sellahewa H, editors. Universal access in human-computer interaction universal access to information and knowledge: 8th international conference, UAHCI 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, June 22-27, 2014, Proceedings, Part II. Cham: Springer; 2014. p. 3–14.

Todorović D. Geometric and perceptual effects of the location of the observer vantage point for linear-perspective images. Perception. 2005;34(5):521–44. https://doi.org/10.1068/p5225 .

Itti L, Koch C. Computational modelling of visual attention. Nat Rev Neurosci. 2001;2(3):194–203. https://doi.org/10.1038/35058500 .

Redi J, Liu H, Zunino R, Heynderickx I. Interactions of visual attention and quality perception. ProcSPIE. 2011. https://doi.org/10.1117/12.876712 .

Rusnak M. Eye-tracking support for architects, conservators, and museologists. Anastylosis as pretext for research and discussion. Herit Sci. 2021;9(1):81. https://doi.org/10.1186/s40494-021-00548-7 .

Clay V, König P, König SU. Eye tracking in virtual reality. J Eye Mov Res. 2019. https://doi.org/10.16910/jemr.12.1.3 .

Brielmann AA, Buras NH, Salingaros NA, Taylor RP. What happens in your brain when you walk down the street? Implications of architectural proportions, biophilia, and fractal geometry for urban science. Urban Sci. 2022;6(1):3. https://doi.org/10.3390/urbansci6010003 .

Böhme G. Atmospheric architectures: the aesthetics of felt spaces. Engels-Schwarzpaul T, editor. London, Oxford, New York: Bloomsbury; 2017.

Panofsky E. Architecture gothique et pensée scolastique, précédé de L’abbé Suger de Saint-Denis. Alençon: Les Éditions de Minuit; 1992.

Henry-Claude M, Stefanon L, Zaballos Y. Principes et éléments de l’architecture religieuse médiévale. Gavaudun: Fragile; 1997.

Scott RA. The Gothic enterprise: a guide to understanding the medieval cathedral. California: University of California Press; 2003.

Erlande-Brandenburg A, Mérel-Brandenburg A-B. Histoire de l’architecture française. Du Moyen Âge à la Renaissance: IVe siècle–début XVIe siècle. Paris: Mengès, Caisse nationale des monuments historiques et des sites; 1995.

Norman E. The house of God. Goring-by-Sea, Sussex: Thames & Hudson; 1978.

Duby G, Levieux E. The age of the cathedrals: art and society, 980–1420. Chicago: University of Chicago Press; 1983.

Rusnak M, Chmielewski P, Szewczyk J. Changes in the perception of a presbytery with a different nave length: funnel church in eye tracking research. Architectus. 2019;2:73–83.

Zhang L, Jeng T, Zhang RX. Integration of virtual reality, 3-D eye-tracking, and protocol analysis for re-designing street space. In: Alhadidi S, Crolla K, Huang W, Janssen P, Fukuda T, editors. CAADRIA 2018 - 23rd international conference on computer-aided architectural design research in Asia learning, prototyping and adapting. 2018.

Zhang RX, Zhang LM. Panoramic visual perception and identification of architectural cityscape elements in a virtual-reality environment. Futur Gener Comput Syst. 2021;118:107–17. https://doi.org/10.1016/j.future.2020.12.022 .

Crucq A. Viewing patterns and perspectival paintings: an eye-tracking study on the effect of the vanishing point. J Eye Mov Res. 2021. https://doi.org/10.16910/jemr.13.2.15 .

Raffi F. Full Access to Cultural Spaces (FACS): mapping and evaluating museum access services using mobile eye-tracking technology. Ars Aeterna. 2017;9:18–38. https://doi.org/10.1515/aa-2017-0007 .

Mokatren M, Kuflik T, Shimshoni I. Exploring the potential of a mobile eye tracker as an intuitive indoor pointing device: a case study in cultural heritage. Futur Gener Comput Syst. 2018;81:528–41. https://doi.org/10.1016/j.future.2017.07.007 .

Jung YJ, Zimmerman HT, Pérez-Edgar K. A methodological case study with mobile eye-tracking of child interaction in a science museum. TechTrends. 2018;62(5):509–17. https://doi.org/10.1007/s11528-018-0310-9 .

Reitstätter L, Brinkmann H, Santini T, Specker E, Dare Z, Bakondi F, et al. The display makes a difference a mobile eye tracking study on the perception of art before and after a museum’s rearrangement. J Eye Mov Res. 2020. https://doi.org/10.16910/jemr.13.2.6 .

Rusnak M, Szewczyk J. Eye tracker as innovative conservation tool. Ideas for expanding range of research related to architectural and urban heritage. J Herit Conserv. 2018;54:25–35.

de la Fuente Suárez LA. Subjective experience and visual attention to a historic building: a real-world eye-tracking study. Front Architect Res. 2020;9(4):774–804. https://doi.org/10.1016/j.foar.2020.07.006 .

Rusnak M, Ramus E. With an eye tracker at the Warsaw Rising Museum: valorization of adaptation of historical interiors. J Herit Conserv. 2019;58:78–90.

Junker D, Nollen Ch. Mobile eyetracking in landscape architecture. Analysing behaviours and interactions in natural environments by the use of innovative visualizations. In: Proceeding of the international conference “Between Data and Science” Architecture, neuroscience and the digital worlds. 2017.

Kabaja B, Krupa M. Possibilities of using the eye tracking method for research on the historic architectonic space in the context of its perception by users (on the example of Rabka-Zdrój). Part 1. Preliminary remarks. J Herit Conserv. 2017;52:74–85.

Kiefer P, Giannopoulos I, Kremer D, Schlieder C, Martin R. Starting to get bored: An outdoor eye tracking study of tourists exploring a city. In: Eye Tracking Research and Applications Symposium (ETRA). 2014.

Karakas T, Yildiz D. Exploring the influence of the built environment on human experience through a neuroscience approach: a systematic review. Front Archit Res. 2020;9(1):236–47. https://doi.org/10.1016/j.foar.2019.10.005 .

Mohammadpour A, Karan E, Asadi S, Rothrock L. Measuring end-user satisfaction in the design of building projects using eye-tracking technology. Austin, Texas: American Society of Civil Engineers; 2015.

Book   Google Scholar  

Dupont L, Ooms K, Duchowski AT, Antrop M, van Eetvelde V. Investigating the visual exploration of the rural-urban gradient using eye-tracking. Spat Cogn Comput. 2017;17(1–2):65–88. https://doi.org/10.1080/13875868.2016.1226837 .

Holmqvist K, Nyström M, Mulvey F. Eye tracker data quality: what it is and how to measure it. in: proceedings of the symposium on eye tracking research and applications. New York. Association for computing machinery; 2012. p. 45–52. (ETRA ’12). https://doi.org/10.1145/2168556.2168563

Foulsham T, Walker E, Kingstone A. The where, what and when of gaze allocation in the lab and the natural environment. Vision Res. 2011;51(17):1920–31. https://doi.org/10.1016/j.visres.2011.07.002 .

Gulhan D, Durant S, Zanker JM. Similarity of gaze patterns across physical and virtual versions of an installation artwork. Sci Rep. 2021;11(1):18913. https://doi.org/10.1038/s41598-021-91904-x .

van Herpen E, van den Broek E, van Trijp HCM, Yu T. Can a virtual supermarket bring realism into the lab? Comparing shopping behavior using virtual and pictorial store representations to behavior in a physical Store. Appetite; 2016;107,196-207.  https://doi.org/10.1016/j.appet.2016.07.033 .

Haskins AJ, Mentch J, Botch TL, Robertson CE. Active vision in immersive, 360° real-world environments. Sci Rep. 2020;10: art. nr 14304. https://doi.org/10.1038/s41598-020-71125-4 .

Hayhoe MM, Shrivastava A, Mruczek R, Pelz JB. Visual memory and motor planning in a natural task. J Vis. 2003;3(1):49–63. https://doi.org/10.1167/3.1.6 .

https://www.tobiipro.com/ . (accessed 10.05.2022)

Francuz P, Zaniewski I, Augustynowicz P, Kopí N, Jankowski T, Jacobs AM, et al. Eye movement correlates of expertise in visual arts. Front Hum Neurosci. 2018;12:87. https://doi.org/10.3389/fnhum.2018.00087 .

Koide N, Kubo T, Nishida S, Shibata T, Ikeda K. Art Expertise reduces influence of visual salience on fixation in viewing abstract-paintings. PLoS ONE. 2015;10(2): e0117696. https://doi.org/10.1371/journal.pone.0117696 .

Pangilinan E, LS& M v. Creating augmented and virtual realities: theory and practice for next-generation spatial computing. sebastopol: O’Reilly Media; 2019.

Tatler BW, Wade NJ, Kwan H, Findlay JM, Velichkovsky BM. Yarbus, eye movements, and vision. Iperception. 2010;1(1):382.

Galley N, Betz D, Biniossek C. Fixation durations—why are they so highly variable? . In: Advances in visual perception research. Thomas Heinen. Hildesheim: Nova Biomedicaal; 2015. p. 83–106.

Stevenson N, Guo K. Image valence modulates the processing of low-resolution affective natural scenes. Perception. 2020;49(10):1057–68. https://doi.org/10.1177/0301006620957213 .

Thompson WB, Willemsen P, Gooch AA, Creem-Regehr SH, Loomis JM, Beall AC. Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence Teleoperators Virtual Environ. 2004;13(5):560–71. https://doi.org/10.1162/1054746042545292 .

Choudhary Z, Gottsacker M, Kim K, Schubert R, Stefanucci J, Bruder G, et al. Revisiting distance perception with scaled embodied cues in social virtual reality. In: Proceedings—2021 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021. 2021. p. 788–97. https://doi.org/10.1109/VR50410.2021.00106

Jamiy FE, Ramaseri CAN, Marsh R. Distance accuracy of real environments in virtual reality head-mounted displays. In: 2020 IEEE international conference on electro information technology (EIT). 2020. p. 281–7. https://doi.org/10.1109/EIT48999.2020.9208300

McGill M, Kehoe A, Freeman E, Brewster SA. Expanding the bounds of seated virtual workspaces. ACM Trans Comput-Hum Interact (TOCHI). 2020;27:1–40. https://doi.org/10.1145/3380959 .

Mon-Williams M, Plooy A, Burgess-Limerick R, Wann J. Gaze angle: a possible mechanism of visual stress in virtual reality headsets. Ergonomics. 1998;41(3):280–5. https://doi.org/10.1080/001401398187035 .

Nuthmann A, Smith T, Engbert R, Henderson J. CRISP: a computational model of fixation durations in scene viewing. Psychol Rev. 2010;117(2):382–405. https://doi.org/10.1037/a0018924 .

Download references

Acknowledgements

Hardware consultation: Ewa Ramus (Neuro Device, Warsaw, Poland); graphic cooperation: Wojciech Fikus (Wroclaw University of Science and Technology, Poland); data collection: Mateusz Rabiega and Magdalena Nawrocka (Wroclaw University of Science and Technology, Poland), Małgorzata Cieślik (independent, Wrocław, Poland), and Daria Kruczek (independent, London, UK). The author thanks Ewa Ramus, Neuro Device, and the technology company Tobii for the opportunity to develop an initial prototype of the VR eye-tracking implementation, and gratefully acknowledges the support of Andrzej Żak. Thank you, Marcin!

Human participation statement: The WUST Ethical Committee approved the experiments and their protocols. Informed consent was obtained from all participants.

Funding

This work was supported by the National Science Center, Poland (NCN) under Miniatura Grant [2021/05/X/ST8/00595].

Author information

Authors and affiliations

Wrocław University of Science and Technology, Bolesława Prusa 53/55, 53-317, Wrocław, Poland

Marta Rusnak


Contributions

MR: methodology, experiment design, data acquisition and analysis, writing. The author read and approved the final manuscript.

Corresponding author

Correspondence to Marta Rusnak.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Rusnak, M. 2D and 3D representation of objects in architectural and heritage studies: in search of gaze pattern similarities. Herit Sci 10, 86 (2022). https://doi.org/10.1186/s40494-022-00728-z


Received: 28 April 2022

Accepted: 01 June 2022

Published: 16 June 2022

DOI: https://doi.org/10.1186/s40494-022-00728-z


Keywords

  • Eye tracker
  • Methodology
  • Flat and spherical stimulus
