VisualBMI shows you what weight looks like on a human body. Using a large index of photos of men and women, you can get a sense of what people look like at different weights, or even at the same weight. If you find this website useful, please share it or send a note.

Huge thanks to all those who share their stories and photos on reddit. This website would not be possible without them. These images are neither hosted by nor affiliated with VisualBMI. They were indexed from reddit posts that link to imgur. The subjects of these photos are not involved with VisualBMI in any way.

If you found this site useful, bookmark it and share it with your friends.


This Is How Much Weight People Have Lost — You Have to See This Visual!

Updated on 7/29/2017 at 7:45 PM


When you throw around a number, like "I've lost 20 pounds" or "I've dropped 50 pounds," it's hard to visualize exactly what that means. Check out these Instagram photos showing the equivalent of pounds lost compared with basic everyday objects. It's a great way to stay motivated, celebrate your victories, and keep yourself inspired to reach your weight-loss goals.

Weight in Dog Food = 102 Pounds

  • Weight in dumbbells = 60 pounds
  • Weight in cinder blocks = 90 pounds
  • Weight in water jugs = 32 pounds
  • Weight in a microwave = 36 pounds
  • Weight in medicine balls = 94 pounds
  • Weight in a tire = 21 pounds
  • Weight in dog food = 60 pounds

3dbodyvisualizer.com

3D Body Visualizer

We know that no amount of reassurance about your body, weight, or height ever feels like enough. That's why we brought you a solution that lets you see yourself and judge your looks without overthinking or self-doubt. Whether you are a man or a woman, our tool can help you set personal goals, compare progress, and understand your fitness journey.

Through the power of visualization, our interactive 3D human modeling simulator allows you to create personalized virtual representations of yourself. You can fine-tune your height, weight, gender, and shape; your on-screen figure will update to represent the details you’ve entered.

Our website is the perfect tool for those looking to gain or lose weight. You can see how you appear in real life or experiment with different visual features, such as muscle definition or fat. Wondering whether you'd look too muscular? See what you would look like after gaining more muscle. Worried about gaining too much fat? You can check that out too! Everything is baked right into our user-friendly interface.

How To Use 3D Visualizer

In our tool's user-friendly interface, you will see different factors and measurements that you can modify on your 3D character. Above the character, you'll see two options. The first lets you switch the figure between Male, Female, and Trained Male. The second lets you change the measuring units between CM/KG and In/LBS.

You can find the most important buttons at the bottom of the screen, where you will see sliders to input your weight and height. The sliders are handy because they allow you to gradually update both factors and see how your appearance is affected by the slow change. It is important to note that we have designed these factors to change with each other by default, as they do in real life. However, you can change this by fixing one of the factors in place.

Pressing the "Fixed" button to the right of either the height or weight slider lets you freeze that value while you freely change the other; changing one factor then no longer affects the other.
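
To make the linked-slider behavior concrete, here is a minimal sketch of how such a model could work. It is not the site's actual code: the class and method names are hypothetical, and the assumption that weight scales with height so as to keep BMI constant is only one plausible way to couple the two values.

```python
from dataclasses import dataclass

@dataclass
class BodyModel:
    """A toy stand-in for the visualizer's height/weight sliders (hypothetical)."""
    height_cm: float = 175.0
    weight_kg: float = 70.0
    weight_fixed: bool = False   # the "Fixed" toggle next to the weight slider

    def bmi(self) -> float:
        # BMI = weight (kg) divided by height (m) squared
        height_m = self.height_cm / 100.0
        return self.weight_kg / (height_m ** 2)

    def set_height(self, new_height_cm: float) -> None:
        """Change height; unless weight is fixed, scale weight so BMI stays constant.

        This linkage is an assumption -- the real tool's coupling may differ.
        """
        if not self.weight_fixed:
            current_bmi = self.bmi()
            self.weight_kg = current_bmi * (new_height_cm / 100.0) ** 2
        self.height_cm = new_height_cm


model = BodyModel()
print(f"Start:  {model.weight_kg:.1f} kg, BMI {model.bmi():.1f}")

model.set_height(185)        # weight follows the height change
print(f"Linked: {model.weight_kg:.1f} kg, BMI {model.bmi():.1f}")

model.weight_fixed = True    # press the "Fixed" button on weight
model.set_height(170)        # now only height changes, so BMI shifts
print(f"Fixed:  {model.weight_kg:.1f} kg, BMI {model.bmi():.1f}")
```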

How Accurate Is the Body Visualizer?

We aim to create the most accurate visualizer on the internet. That said, the 3D Visualization tool produces an approximate visualization based on average body parameters, so treat the result as an estimate rather than an exact likeness.

How We Plan To Improve

We are always planning to improve our Visualizer by adding more features. Some of our planned features have already been added, while others are still in the works, including leg length, hip size, chest size, waist size, breast size, and muscularity. The more factors we add, the better the representation you get of your body.

So, try out our 3D Body Visualizer and let us know if you found it useful!


12 thoughts on “3D Body Visualizer”

I love this new site. I like the way you show animated images instead of just static images. I assume this is a work in progress, so I have a few suggestions. In addition to the basic height and weight entries, I'd like to see options for basic measurements such as chest (bust), waist and hips. I'd also like an option for including multiple subjects in the same image (for example, male and female together, or up to five subjects in the same image).

I agree, those sound like good suggestions.

Nice. Good for visualising how I will look after weight loss.

Can you add any more features? Like chest size, waist size, bicep size, etc.?

I agree with the previous comments; there need to be measurements for various parts of the body. On top of that, I'd like to see more definition in the human anatomy. For example, if you were to scale down the weight to lower levels, you would see the ribs and hip bones become more visible. Or perhaps you could add a scale to increase muscle tone, as two people with a similar body mass would look fairly different from each other if one worked out regularly and one didn't. Overall, I enjoy what you're building here, but I feel like you could add a bit more realism and variation to the final product.

The weight is not accurate at all. It looks 25 pounds heavier than it should be.

It would be good to put in some variables for skeletal structure, particularly things like leg length – the model looks nothing like me; if my legs were proportioned similarly, I'd be 25-30 cm taller! As for shoulders/ribcage – some of us are almost as wide as we're tall!

It's fun, but add customization.

I know it's not going to be accurate, but I don't use this for my own body. I make characters and I want to know how they'd look, and it's been good for me so far.



Food Serving Sizes: A Visual Guide

Find out how everyday objects can ease the guessing game of serving sizes and portion control.

Figuring Out Portion Sizes

What you eat is important, especially when it comes to making positive food choices, but how much you eat is the real brainteaser of healthy eating. With oversize food portions everywhere, from wide-diameter bagels to mounds of pasta, translating a serving size into a reasonable portion is a big challenge in a more-is-better world.


The first step is knowing the difference between a portion and a serving size. A serving size is a recommended standard measurement of food. A portion is how much food you eat, which could consist of multiple servings.

Visually comparing a serving size to an everyday object you have at home, such as a baseball or a shot glass, can be helpful in identifying what a serving size looks like without carting around a scale and measuring cups for every meal and snack. Here are some general guidelines for the number of daily servings from each food group*:

  • Grains and starchy vegetables: 6-11 servings a day
  • Nonstarchy vegetables: 3-5 servings a day
  • Dairy: 2-4 servings a day
  • Lean meats and meat substitutes: 4-6 ounces a day or 4-6 one-ounce servings a day
  • Fruit: 2-3 servings a day
  • Fats, oils, and sweets: Eat sparingly

*Check with your doctor or dietitian to determine the appropriate daily recommendations for you.
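
As a quick illustration of how these guidelines could be tracked, the sketch below encodes the serving ranges listed above in a small lookup table and checks a day's tally against them. The function and variable names are hypothetical, and treating "eat sparingly" as 0-1 servings is an assumption.

```python
# Daily serving guidelines from the list above, encoded as (min, max) ranges.
DAILY_SERVINGS = {
    "grains and starchy vegetables": (6, 11),
    "nonstarchy vegetables": (3, 5),
    "dairy": (2, 4),
    "lean meats and meat substitutes": (4, 6),   # one-ounce servings
    "fruit": (2, 3),
    "fats, oils, and sweets": (0, 1),            # "eat sparingly" -- an assumption
}

def check_day(tally):
    """Report whether each food group's count falls inside its recommended range."""
    for group, (low, high) in DAILY_SERVINGS.items():
        eaten = tally.get(group, 0)
        if eaten < low:
            status = f"below the {low}-{high} range"
        elif eaten > high:
            status = f"above the {low}-{high} range"
        else:
            status = "within range"
        print(f"{group}: {eaten} serving(s), {status}")

check_day({"fruit": 2, "dairy": 1, "grains and starchy vegetables": 7})
```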

Whole Grain Bread

1 serving = 1 slice

A slice of bread is about the size of a DVD.

Butter

1 serving = 1 teaspoon

One small pat of butter is equal to one serving size.

Green Peas

1 serving = 1/2 cup

A serving-size side of green peas is equal to half of a baseball.

Air-Popped or Light Microwave Popcorn

1 serving = 3 cups

Snack away on the healthier varieties of popcorn and enjoy a serving size of three baseballs.

Baked Potato or Sweet Potato

Choose a potato the size of a computer mouse.

Salad Greens

1 serving = 1 cup

When making your perfect salad, the serving of greens should be the size of one baseball.

Reduced-Fat Salad Dressing

1 serving = 1/4 cup

Top your salad with one golf-ball size serving of dressing.

Peanut Butter

1 serving = 1 tablespoon

You're doing great if your peanut butter serving fits into half of a 1-ounce shot glass.

Bagel

1 serving = 1 ounce

A bagel the size of half of a baseball is equal to one serving.

Pasta

1 serving = 1/3 cup cooked

A serving of pasta is roughly the same size as one tennis ball.

Olive Oil

Olive oil is a great alternative to butter, but remember to keep the serving size similar to one pat of butter.

Canned Fruit

Canned fruit in light juices is equal to half of a baseball.

Baked French Fries

1 serving = 1 cup + 1 teaspoon of canola or olive oil

A serving of French fries looks like the equivalent of one baseball. Don't forget to account for the teaspoon of oil.

Shredded Cheese

1 serving = 2 tablespoons

Toss your salad or taco with a serving of shredded cheese equal to one 1-ounce shot glass.

Steamed Broccoli

Enjoy a serving of steamed broccoli that's the size of half of a baseball.

100-Percent Orange or Apple Juice

1 serving = 4 ounces or 1/2 cup

A fun-size juice box is the serving size you should aim for. Another way to think about it: an average woman's fist resting on its side.

Apple

1 serving = 1 medium apple

Pick an apple about the same size as one baseball.

Fish

1 serving = 3 ounces cooked

A serving of fish will have the thickness and length of a checkbook.

Reduced-Fat Mayonnaise

If you go for reduced-fat mayo, fill half of a 1-ounce shot glass for a serving.

Low-Fat Block Cheese

Keep your serving size of hard cheese to the equivalent of three dice.

Chicken, Beef, Pork, or Turkey

When cooking lean meat, choose a serving the size of one deck of cards.

Nonfat or Low-Fat Milk

A serving looks like the small 8-ounce carton of milk you loved in school.

Cookies

1 serving size = 2 cookies

Cookies shouldn't be monster-size. Think Oreos for a good measure of comparison.

Scoop out the creamy dessert to equal half of a baseball.

Candy

If you must have your candy, a serving equals one 1-ounce shot glass.


7 Fun Ways to Visually Track Weight Loss Progress


Compare Your Weight Loss Results to Objects to See Progress

16 Dec 2018

We’ve all been disappointed by our weight loss results at some point in time. Maybe you’ve eaten super-healthily all week but the scales don’t reflect your efforts. Or you’ve hit the gym hard and done a few back-to-back classes but aren’t noticing any difference in your dress size. This kind of situation can be really demotivating and can lead to a downward spiral if left unchecked. But there are lots of ways to switch up your mindset and turn your disappointment into meaningful progress. In this article, we look at one way to do this – using everyday household objects.

Using household objects as weight loss motivation

If you're trying really hard to lose weight, it can be really disappointing when you don't see results. Dieters can get deflated when they fail to see the weight loss they've achieved reflected in photos or better-fitting clothes. This can be even tougher if you're trying to lose a significant amount of weight. We tend to lose weight fairly evenly from all parts of the body, which means it can take a while to notice results. Although the total amount lost might be a lot, it translates to a small amount from each part of the body, which means the changes are often less apparent than we'd like.

Another issue is that we're simply less able to notice changes in ourselves. We see our reflection in the mirror every day, so it's hard to spot small changes that happen gradually. It's often the case that other people will notice our weight loss before we really see it for ourselves. This is because they see us less often, so the changes are more noticeable to them. As a general rule of thumb, in:

  • 1-2 weeks – you’ll start to feel better
  • 4-6 weeks – other people will notice that you’ve lost weight
  • 8-12 weeks – you’ll start to notice the weight loss for yourself

If you're feeling disappointed with your progress, it can be helpful to look at it objectively. This is where household objects can be a useful and fun way to reflect on your progress. You might not feel like you've lost a significant amount of weight, but when you see it's the equivalent of a box of cereal or a kitchen sink, it'll seem very different.

Weight loss comparison to objects

Let’s look at how different amounts of weight loss translate into various household items.

2lb – A large bag of sugar

2lb (1kg) of body fat takes up around 1000 cubic centimeters, or just over 4 cups in volume. That's quite a lot! Losing this amount of fat is the equivalent of losing a large bag of sugar – and that's no small feat! Carrying this seemingly harmless extra weight can make everything harder, from getting out of bed to walking down the street. So by losing it, you'll start to notice everyday activities becoming that much easier.
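
For the curious, here is the arithmetic behind the "just over 4 cups" figure as a short sketch. It uses the paragraph's own approximation that about 1 kg of body fat occupies roughly 1000 cubic centimeters; the exact cup count depends on whether you round 2 lb up to 1 kg.

```python
# Approximate volume of lost body fat, using the paragraph's assumption that
# ~1 kg of fat occupies roughly 1000 cubic centimeters.
KG_PER_LB = 0.4536
CM3_PER_KG_FAT = 1000.0       # the article's approximation
CM3_PER_US_CUP = 236.6

def fat_volume_cups(pounds_lost: float) -> float:
    """Convert pounds of fat lost into an approximate volume in US cups."""
    kilograms = pounds_lost * KG_PER_LB
    cubic_cm = kilograms * CM3_PER_KG_FAT
    return cubic_cm / CM3_PER_US_CUP

print(f"{fat_volume_cups(2):.1f} cups")    # ~3.8 cups; rounding 2 lb to 1 kg gives ~4.2
print(f"{fat_volume_cups(20):.1f} cups")   # ~38 cups for a 20 lb loss
```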

8lb – Your head

Your head, complete with brain, weighs around 8lb. Imagine how much lighter you'd feel without it on your shoulders! Granted, you kind of need your head, but if this weight loss is down to pure fat loss, you'll certainly be feeling a lot lighter!

13lb – An obese cat


The average pet cat should weigh around 10lbs (depending on the breed) but an obese cat can weigh upwards of 13lbs. Although some people may find losing this weight noticeable, others may not – it’ll depend on your frame and starting weight. But just imagine carrying an obese cat around in a backpack all day (with air holes of course!). Just getting up off the sofa with a Garfield-sized cat would be 10x harder, let alone trying to go for a run. So if you’ve lost 13+ pounds but don’t see it as a significant amount then think again. You’re freeing up your body for more active pursuits that’ll further contribute to your weight loss.

15lb – A vacuum cleaner

Vacuum cleaners weigh around 15lbs (7kg) on average, so losing this amount of weight is truly impressive. We all know how difficult it is to carry one up a flight of stairs! And if you’re not quite there yet, imagine how much easier everything will be once you have shed this weight.

55lb – A poodle

Although poodles come in lots of shapes and sizes, they weigh around 55lbs (25kg) on average. This is the equivalent of roughly four obese cats or 25 bags of sugar! By losing this amount of weight you'll not only decrease your waistline, but significantly reduce your risk of chronic diseases too.

175lb – A washing machine


A typical washing machine weighs around 175lbs (79kg) and takes up around 4 cubic feet. Taking your BMI from very obese down to a healthy range may involve losing this amount of weight for some people. It’ll take time but the health benefits will be well worth it (and we’re confident that you’ll definitely notice the difference!).
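
If you like the idea of automating these comparisons, a tiny sketch like the one below (hypothetical names, values taken straight from the list above) can map any amount of weight lost to the nearest object.

```python
# Everyday-object equivalents taken from the comparisons above.
OBJECT_WEIGHTS_LB = {
    2: "a large bag of sugar",
    8: "your head",
    13: "an obese cat",
    15: "a vacuum cleaner",
    55: "a poodle",
    175: "a washing machine",
}

def closest_object(pounds_lost: float) -> str:
    """Return the object from the list whose weight is nearest to the amount lost."""
    nearest = min(OBJECT_WEIGHTS_LB, key=lambda w: abs(w - pounds_lost))
    return f"{pounds_lost:g} lb lost is roughly {OBJECT_WEIGHTS_LB[nearest]} ({nearest} lb)"

print(closest_object(10))   # roughly your head (8 lb)
print(closest_object(60))   # roughly a poodle (55 lb)
```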

How much have you lost so far?

It's easy to think 2lb is no great loss, but now that you have some practical objects to benchmark your weight loss against, you can really begin to appreciate how much you've achieved!

It's also important to take stock of the other health benefits you're experiencing. Do you have more energy, or find it easier to walk up flights of stairs? Has your skin become clearer, or your brain less foggy? Perhaps you're sleeping better as a result of consuming less sugar and becoming more active. Whatever stage you're at in your weight loss journey, there will be side benefits to your health that you might not have realized are connected. Be confident that you're making progress and that every positive decision you make is getting you closer to your end goal.


Still feeling frustrated by your weight loss progress?


PhenQ contains thermogenic ingredients that enable your body to burn fat faster and more efficiently. So, as well as decreasing your calorie intake, it'll help you burn off stored body fat. You can learn more about how PhenQ can help here.

Which household object matches your weight loss so far? We’d love to hear about your progress in the comments below!


Weight Scales vs. 3D Body Scanning for Measuring Success

By Giselle Naranjo on 10/1/21 10:22 AM


Gym owners have a challenge on their hands: how to measure whether their weight loss programs are actually effective. The compelling idea is to use a modern approach that gives a precise measurement of body composition rather than tracking weight loss or gain alone.

The traditional tool for this purpose has been the weight scale, but weight scales have limitations. While both scales and 3D body scanning can deliver valuable information about the progress of a fitness program, they come with different sets of advantages and disadvantages.

Keep reading to learn which one is right for your clients.

The Use of Weight Scales

The popularity of weight scales for measuring progress has been in steady decline. As the health risks of carrying excess weight have become better recognized, tools that can accurately monitor body fat composition have been widely embraced by fitness professionals, personal trainers, and their clients.

The downside of weight scales is that they just give information on how much a person weighs. They do not provide data on body fat percentage. Thus, while weight scales can inform an individual whether they are losing or gaining weight, they cannot be used to monitor the effectiveness of fat loss programs.

In addition, traditional weight scales do not provide a measurement of fat distribution.

While weight scales are suitable for monitoring the amount of weight lost, they give no information on the types and amounts of fat that were actually reduced.

Benefits of 3D Body Scanning

For these reasons, more fitness professionals and health clubs across the world have been using 3D body scanning technology to monitor their clients' progress.

3D body scanning is a scientifically proven method for measuring body composition, including weight, lean muscle mass, and fat. It can also help monitor changes in body fat composition over time.

This form of body composition measuring is considered the gold standard in the fitness community. It is also considered to be the most accurate form of measuring, with a margin of error of less than 2%.

A 3D body scanner works by taking horizontal, cross-sectional images of a person's body. From these, it can determine where fat is located and how much there is. The technology can show changes in more detail than weight scales, with roughly five times the accuracy of traditional methods.

This method gives more information about which types of tissue are prevalent in an individual, especially once some time has passed after the initial scan and changes in lifestyle habits have become evident.

For example, if an individual had just started working out, weight scales would not be able to give any indication that muscle mass was increasing or fat was being lost. 3D body scanning, on the other hand, can provide this information.


Accuracy of 3D Body Scanning

The accuracy of 3D body scanning can be attributed to the visual representation it offers. It shows not only weight loss but also the changes that are happening in various parts of the body due to the effects of fat reduction or muscle-building programs.

For example, a 3D body scan would show an increase in measurements for biceps, chest size, and legs.

Additionally, it can measure changes in other areas, such as waist circumference, which reflects a shift in fat distribution and an improving health condition.

Leverage Body Measurements

In comparison to weight scales, 3D body scanning technology is a much more effective way of tracking fat loss and muscle gain.

When taken by a professional in a gym setting, this process creates an accurate representation of one's body weight and composition.

In fact, the results from this assessment can be used as a guide when designing workout plans or personalized programs based on what the client wants to achieve with their exercise routine.

Why 3D Body Scanning Is Becoming So Popular Among Fitness Professionals

The popularity of 3D body scanning is mainly due to the following:

  • It offers precise and accurate measurements, unlike weight scales.
  • It allows measuring goals like waist circumference reduction, lean muscle gain, etc.
  • Personalized plans can be developed based on the client's goals.

3D body scanning technology captures more than just one's weight; it provides information that will help determine the effectiveness of a fitness program and how clients progress under these programs.

Thus, this approach offers several benefits such as:

  • ability to predict future obesity issues
  • identification of problem areas which aids in prioritizing workouts 
  • understanding of lifestyle patterns for better weight management decisions
  • identifying possible health risks early so steps can be taken to reduce them.

Some experts believe that 3D body scanning technology is the future of fitness assessment. It's a no-brainer why professionals prefer this method to weight scales for helping their clients reach their goals and make healthier lifestyle choices.

3D Body Scanning vs. Weight Scales: Which is Better?

No doubt, 3D body scanning technology has proven to be more accurate than weight scales in measuring various aspects of body composition, including fat and lean muscle.

However, if you're still not convinced that this method can provide better results, here are several points to consider about why it might be time to switch from weight scales to 3D body scanning:

  • Clients receive personalized plans based on their health, lifestyle, and preferences instead of following the same weight loss program as everyone else.
  • The assessment is non-intrusive because it uses infrared light (the same kind of light used in a TV remote control) to scan the body.
  • 3D body scanners capture measurements quickly and efficiently, so clients don't have to wait around long before receiving their results.

While both weight scales and 3D body scanning can provide valuable information about one's health status, this newer method offers more benefits than just reading numbers on a scale. That makes it especially useful for professionals designing efficient programs for clients who are serious about losing weight and gaining muscle mass.

Tips for Using a 3D Body Scanner

Here are some tips to consider when using 3D body scanners for measuring your clients' progress:

  • Make sure you explain the process clearly beforehand, so clients know what to expect.
  • Allow time for them to change into the workout clothing they will be wearing during their assessment.
  • Ensure that their clothes are form-fitting, since this makes it easier to capture accurate measurements.
  • Explain what each measurement means and how it can help them reach their weight loss goals more efficiently.

Remember, 3D body scanners are only helpful if they're used correctly and with the right guidance.

Tips for Communicating 3D Body Scanning Results With Your Clients

There are many benefits to communicating 3D body scanning results with your clients. For instance, you can encourage them to reach their goals more efficiently by showing them the following:

  • Areas in which they need to work harder, through weight training or cardio exercises, to improve their overall fitness levels.
  • How much progress they have made since starting their program.
  • How well you know them, and how results might differ if they were working with someone else who was not as invested in their health.

Before giving out results, explain that these numbers are only estimates based on the technology used during the assessment, since no two bodies are alike. When offering constructive criticism, use wording like "these measurements indicate that..." rather than "you need to do this or that."

Going Beyond a Single Number

In the end, 3D body scanners are a great way for fitness professionals to help their clients make better decisions in reaching their weight loss and muscle gain goals. With the right direction and education in using these machines, you can expect them to bring more positive outcomes in your sessions together.

To get a more reliable body composition measurement, people must turn to 3D body scanning technology, which goes beyond checking weight loss progress.  

3D body scanning is seen as an upgrade from weight scale measurements because it is more precise in assessing changes in fitness levels over time. While both tools offer accountability and progress reports, which motivate people to continue working toward their goals, fitness professionals everywhere are already taking notice of how 3D body scanner technology can help their clients improve physical appearance and overall wellness.

At its core, 3D body scanning technology is an efficient tool that should be incorporated into your gym's fitness assessment process because it not only helps monitor changes in physique but also assesses specific areas of concern clients may have.


Weight Loss Comparison to Objects


  • Words By PhenGold
  • Published January 11, 2021

It's easy to get caught up with weight loss, and the wait to see results can feel never-ending, especially when you've been eating clean all week, only to feel deflated when you hit the scales.

Or maybe you’ve been powering through classes at the gym, yet you can’t quite see a drop in dress size just yet. This can feel pretty demotivating, especially when you know you’ve been working so hard. However, there are plenty of ways to turn this mindset around. But how? Well, compare weight loss to objects, that’s how.

Comparing weight loss to objects for ultimate motivation

When you’re putting all your efforts into losing weight, it can be super disappointing when you don’t see results. Many dieters don’t realise what they’ve achieved unless it’s instantly noticeable in photos or in a slimmer-fitting outfit. Despite the myths of exercises to burn specific areas of fat, our bodies tend to lose fat pretty equally all over.

Which means the results are less noticeable unless viewed overall.

Even though the total is there for us to see, changes can feel less apparent when the weight has come off from all over the body. Plus, we're all guilty of failing to notice changes in ourselves. We see ourselves daily, whether it's a quick glance in the mirror before work or while getting ready for a shower – we're simply too familiar with our own body size.

More commonly, it’s other people that notice our weight loss before we do.

This is usually because they see us a little less – making weight changes more noticeable. Generally, weight loss follows this pattern:

  • In 1 to 2 weeks – you’ll begin to feel better
  • In 4 to 6 weeks – others will notice your weight loss
  • In 8 to 12 weeks – you’ll notice weight loss yourself

If you’re struggling to stay motivated, it can be beneficial to think of your weight loss objectively. This is where everyday household items can be a great way to reflect on your weight loss journey. You may not feel like you’ve lost a lot of weight, but when you start to consider your weight loss compared to objects, you’ll suddenly gain some perspective.

Weight loss compared to objects

Before you go putting your weight loss progress down, let’s take a look at a range of household items and compare your weight loss to these objects.

2lb – A big bag of sugar

While you may frown upon a 2lb loss as small, 2lb (1kg) of body fat actually takes up around 1000 cubic centimeters – just over 4 cups in volume. So you can stop beating yourself up about a small loss! Losing 2lb is the equivalent of losing a big bag of sugar, and that's definitely something to feel proud about. What may seem like a harmless couple of pounds can actually make a huge difference to your everyday activities – by losing this weight, you'll start to notice those benefits.

5lb – A 2-liter bottle of soda

The next time you're in the store grabbing a drink, just remember that an innocent-looking 2-liter bottle of soda weighs about the same as your 5lb weight loss. So the next time you step off the scales looking for visual results, just remember that a big bottle of soda is a great comparison for your weight loss.

8lb – Your own head

While your head is pretty important – and unlosable – it's a great measurement for weight loss. Your head, brain included, weighs around 8lbs, and you carry it around with you every damn day! At the end of a long day, our heads can feel tired and heavy. So just think of that the next time you're 8lb down and looking for something to compare your weight loss to.

13lb – A pretty chubby cat

While we won't suggest fattening up your feline friend for a weight comparison, an obese cat weighs upwards of 13lbs. While 13lbs is an incredible amount of weight to lose, your frame and starting weight can determine how quickly you begin to see the result in the mirror. Just imagine a weight belt consisting of your furry friend! That's no small feat for sure. If you're doubting a 13lb+ weight loss, think again, because we're not sure you'd want to take your furry pal on your next run after losing that weight!

15lb – Vacuum cleaner


When you're vacuuming your home, you don't really think about the weight you're pushing around. With vacuum cleaners weighing around 15lbs (7kg), losing this weight is one hell of an achievement. The next time you're feeling unsure about this amount of weight loss, try trekking up a flight of stairs with your vacuum! Suddenly 15lbs begins to show.

33lb – 4 Gallons of water

With one gallon weighing in at approximately 8.35 pounds, you’ll think again about lugging four gallons of water around. Depending on your starting weight and frame, 33lb might not feel like enough just yet to see the results you’re aiming for. But if you jump back on that treadmill with 4 gallons of water strapped to your back – you’ll quickly realise it’s a massive achievement.

55lb – A poodle

Although a poodle may seem pretty delicate and dainty, they actually weigh around 55lbs (25kg) on average. Losing this significant amount of weight will benefit your heart, body and overall health. Think of losing 55lbs as your friendly poodle OR 11 bottles of your favourite 2-liter soda. Crazy, huh?

75lb – 100 Cans of beer

This is a tremendous weight loss and one you'll certainly notice. Losing 75lbs equals around 100 cans of beer! If you're ever doubting your weight loss, comparing 75lbs to this beer load can definitely get you feeling triumphant again. And of course, if you're still feeling unsure, try a squat with 100 beers – then you definitely will!

100lb – 14,512 Tea bags

Now that’s a lot of tea! 100lbs is an incredible weight loss, and you’re definitely going to reap the benefits. Losing this amount will not only significantly decrease your waistline, but it’ll also help lower your BMI and reduce your risk of chronic diseases.

175lb – A washing machine


If you’ve moved house a couple of times, you’ll know how heavy your typical washing machine can be. Yep, that’s right. The average washing machine can weigh around 175lb (79kg), taking up around 4 cubic feet. If you were previously hitting some heavy figures on the scales, losing this amount of weight can bring your BMI down to a healthy range and benefit your body in a whole world of ways.

It may take some time, but just think about your washing machine the next time you tally up your weight loss so far! That's a difference you'll seriously notice.

Your weight loss so far

It’s easy to get caught up thinking that a 2lb loss shouldn’t be something to be proud of. When you compare weight loss to objects, however, suddenly it’s clear how much you’ve achieved.

Alongside being able to shop for a new wardrobe or fit into your favourite outfit, there is also an abundance of health benefits that comes with losing weight. Do you notice everyday tasks feel easier, such as climbing stairs, getting out of bed, or carrying the shopping into your home? Maybe your skin has become clearer, your mind less foggy and your moods more relaxed.

You may notice that you have more energy and motivation – alongside the many health benefits, such as reducing your risk of chronic diseases. No matter what stage you’re at when it comes to your weight loss, it’s your motivation and celebration of weight loss wins that matters! When you compare your weight loss to everyday objects, suddenly it’s clear how much you’ve lost.

Looking to speed up your weight loss?

If you’re feeling a little frustrated with your weight loss journey, why not try PhenGold? Our fat burning ingredients are clinically proven to help you lose weight – safely.

PhenGold works by increasing your metabolism and suppressing your hunger, so you’ll quickly resist the urge to snack! When your body is full steam ahead with losing weight, you’ll be shedding the pounds in no time.


  • Open access
  • Published: 06 April 2021

Limits to visual representational correspondence between convolutional neural networks and the human brain

  • Yaoda Xu   ORCID: orcid.org/0000-0002-8697-314X 1 &
  • Maryam Vaziri-Pashkam 2  

Nature Communications volume 12, Article number: 2065 (2021)


  • Neural decoding
  • Object vision

A Publisher Correction to this article was published on 06 May 2021

This article has been updated

Convolutional neural networks (CNNs) are increasingly used to model human vision due to their high object categorization capabilities and general correspondence with human brain responses. Here we evaluate the performance of 14 different CNNs compared with human fMRI responses to natural and artificial images using representational similarity analysis. Despite the presence of some CNN-brain correspondence and CNNs’ impressive ability to fully capture lower level visual representation of real-world objects, we show that CNNs do not fully capture higher level visual representations of real-world objects, nor those of artificial objects, either at lower or higher levels of visual representations. The latter is particularly critical, as the processing of both real-world and artificial visual stimuli engages the same neural circuits. We report similar results regardless of differences in CNN architecture, training, or the presence of recurrent processing. This indicates some fundamental differences exist in how the brain and CNNs represent visual information.


Introduction

Recent hierarchical convolutional neural networks (CNNs) have achieved human-like object categorization performance 1 , 2 , 3 , 4 . It has additionally been shown that representations formed in lower and higher layers of the network track those of the human lower and higher visual processing regions, respectively 5 , 6 , 7 , 8 . Similar results have also been obtained in monkey neurophysiological studies 9 , 10 . CNNs incorporate the known architectures of the primate lower visual processing regions and then repeat this design motif multiple times. Although the detailed neural mechanisms governing high-level primate vision remain largely unknown, the brain–CNN correspondence has generated the excitement that perhaps the algorithms governing high-level vision would automatically emerge in CNNs to provide us with a shortcut to fully understand and model primate vision. Consequently, CNNs have been regarded by some as the current best models of primate vision (e.g., 11 , 12 ). So much so that it has recently become common practice in human functional magnetic resonance imaging (fMRI) studies to compare fMRI measures to CNN outputs (e.g., 13 , 14 , 15 ).

Here, we reevaluate the key fMRI finding showing that representations formed in lower and higher layers of the CNN could track those of the human lower and higher visual processing regions, respectively. Our goal here is neither to deny that CNNs can capture some aspects of brain responses better than previous models nor to enter a “glass half empty” vs. “glass half full” subjective debate. But rather, we aim to evaluate CNN modeling as a viable scientific method to understand primate vision and whether there are fundamental differences in visual processing between the brain and CNNs that would limit CNN modeling as a shortcut for understanding primate vision.

Two approaches have been previously used for establishing a close brain and CNN representation correspondence 5 , 6 , 7 , 8 . One approach has used linear transformation to link individual fMRI voxels to the units of CNN layers through training and cross-validation 6 , 7 . While this is a valid approach, it is computationally costly and requires large amounts of training data to map a large number of fMRI voxels to an even larger number of CNN units. The other approach has bypassed this direct voxel-to-unit mapping, and instead, has examined the correspondence in visual representational structures between the human brain and CNNs using representational similarity analysis (RSA 16 ). With this approach, both Khaligh-Razavi and Kriegeskorte 8 and Cichy et al. 5 reported a close correspondence in the representational structure of lower and higher human visual areas to lower and higher CNN layers, respectively. Khaligh-Razavi and Kriegeskorte 8 additionally showed that such correlations exceeded the noise ceiling for both brain regions, indicating that the representations formed in a CNN could fully capture those of human visual areas (but see ref. 17 ).

These human findings are somewhat at odds with results from neurophysiological studies showing that the current best CNNs can only capture about 50–60% of the explainable variance of macaque V4 and IT 9 , 10 , 18 , 19 . Khaligh-Razavi and Kriegeskorte 8 and Cichy et al. 5 were also underpowered by a number of factors, raising concerns regarding the robustness of their findings. Most importantly, none of the above fMRI studies tested altered real-world object images (such as images that have been filtered to contain only the high or low spatial frequency components). As human participants have no trouble recognizing such filtered real-world object images, it is critical to know if a brain–CNN correspondence exists for these filtered real-world object images. Decades of vision research has successfully utilized simple and artificial visual stimuli to uncover the complexity of visual processing in the primate brain, showing that the same algorithms used in the processing of natural images would manifest themselves in the processing of artificial visual stimuli. If CNNs are to be used as working models of the primate visual brain, it is equally critical to test whether a close brain–CNN correspondence exists for the processing of artificial objects.

Here, we compared human fMRI responses from three experiments with those from 14 different CNNs (including both shallow and very deep CNNs and a recurrent CNN) 20 . In particular, following Khaligh-Razavi and Kriegeskorte 8 and Cichy et al. 5 and using the lower bound of the noise ceiling from the human brain data as our threshold, we examined how well visual representational structures in the human brain may be captured by CNNs, with “fully capture” meaning that the brain-CNN correlation would be as good as the brain-brain correlation between the human participants, which in turn would indicate that CNN is able to fully account for the total amount of explainable brain variance. We found that while a number of CNNs were successful at fully capturing the visual representational structures of lower-level human visual areas during the processing of both the original and filtered real-world object images, none could do so for these object images at higher-level visual areas. In addition, none of the CNNs tested could fully capture the visual representations of artificial objects in lower-level human visual areas, with all but one also failing to do so for these objects in higher-level human visual areas. Some fundamental differences thus exist between the human brain and CNNs and preclude CNNs from fully modeling the human visual system at their current states.

In this study, we reexamined previous findings that showed close brain–CNN correspondence in visual processing 5 , 6 , 7 , 8 . We noticed the two studies that used the RSA approach were underpowered in two aspects. First, both Khaligh-Razavi and Kriegeskorte 8 and Cichy et al. 5 used an event-related fMRI design, known to produce a low signal-to-noise ratio (SNR). This can be seen in the low brain–CNN correlation values reported, with the highest correlation being less than 0.2 in both studies. While Cichy et al. 5 did not calculate the noise ceiling, thus making it difficult to assess how good the correlations were, the lower bounds of the noise ceiling were around 0.15–0.2 in Khaligh-Razavi and Kriegeskorte 8 , which is fairly low. Second, both studies defined human brain regions anatomically rather than functionally in each individual participant. This could affect the reliability of fMRI responses, potentially contributing to the low noise ceiling and low correlation obtained. Here, we took advantage of existing data sets from three fMRI experiments that overcome these drawbacks and compared visual processing in the human brain with those of 14 different CNNs. These data sets were collected while human participants viewed both unfiltered and filtered real-world object images and artificial object images. This allowed us to test not only the robustness of brain–CNN correlation, but also its generalization across different image sets. Because the RSA approach allows easy comparisons of multiple fMRI data sets with multiple CNNs, and because a noise ceiling can be easily derived to quantify the degree of the brain–CNN correspondence, we used this approach in the present study.

Our fMRI data were collected with a block design in which responses were averaged over a whole block of multiple exemplars to increase SNR. In three fMRI experiments, human participants viewed blocks of sequentially presented cut-out images on a gray background at fixation and pressed a response button whenever the same image repeated back to back (Fig.  1a ). Each image block contained different exemplars from the same object category, with the exemplars varied in identity, viewpoint/orientation, and pose (for the animal categories) to minimize the low-level similarities among them (see Supplementary Figs.  1 and 2 for the full set of images used). A total of eight real-world natural and manmade object categories were used, including bodies, cars, cats, chairs, elephants, faces, houses, and scissors 21 , 22 . In Experiment 1, both the original images and the controlled version of the same images were shown (Fig.  1b ). Controlled images were generated using the SHINE technique to achieve spectrum, histogram, and intensity normalization and equalization across images from the different categories 23 . In Experiment 2, the original, high and low SF contents of an image from six of the eight real-world object categories were shown (Fig.  1b ). In Experiment 3, both the images from the eight real-world image categories and images from nine artificial object categories 24 were shown (Fig.  1b ).

Figure 1

A An illustration of the block design paradigm used. Participants performed a one-back repetition detection task on the images. An actual block in the experiment contained ten images with two repetitions per block. See “Methods” for more details. B The stimuli used in the three fMRI experiments. Experiment 1 included the original and the controlled images from eight real-world object categories. Experiment 2 included the images from six of the eight real-world object categories shown in the original, high SF, and low SF format. Experiment 3 included images from the same eight real-world object categories and images from nine artificial object categories. Each category contained ten different exemplars varying in identity, viewpoint/orientation, and pose (for the animal categories) to minimize the low-level image similarities among them. See Supplementary Figs.  1 and 2 for the full set of images used. C The human visual regions examined. They included topographically defined early visual areas V1–V4 and functionally defined higher object processing regions LOT and VOT. D The representational similarity analysis used to compare the representational structural between the brain and CNNs. In this approach, a representational dissimilarity matrix was first formed by computing all pairwise Euclidean distances of fMRI response patterns or CNN layer output for all the object categories. The off-diagonal elements of this matrix were then used to form a representational dissimilarity vector. These dissimilarity vectors were correlated between each brain region and each sampled CNN layer to assess the similarity between the two. C is reproduced from Xu and Vaziri-Pashkam 61 with permission.

For a given brain region, we averaged fMRI responses from a block of trials containing exemplars of the same category and extracted the beta weights (from a general linear model) for the entire block from each voxel. The responses from all the voxels in a given region were then taken as the fMRI response pattern for that object category in that brain region. Following this, fMRI response patterns were extracted for each category from six independently defined visual regions along the human occipito-temporal cortex (OTC). They included lower visual areas V1 to V4 and higher visual object processing regions in lateral occipito-temporal (LOT) and ventral occipito-temporal (VOT) cortex (Fig.  1c ). LOT and VOT have been considered as the homolog of the macaque inferotemporal (IT) cortex involved in visual object processing 25 . Their responses have been shown to correlate with successful visual object detection and identification 26 , 27 , and their lesions have been linked to visual object agnosia 28 , 29 .

The 14 CNNs we examined here included both shallower networks, such as Alexnet, VGG16, and VGG 19, and deeper networks, such as Googlenet, Inception-v3, Resnet-50, and Resnet-101 (Supplementary Table  1 ). We also included a recurrent network, Cornet-S, that has been shown to capture the recurrent processing in macaque IT cortex with a shallower structure 12 , 19 . This CNN is argued to be the current best model of the primate ventral visual regions 19 . All CNNs were pretrained with ImageNet images 30 . To understand how the specific training images would impact CNN representations, we also examined Resnet-50 trained with stylized ImageNet images 31 . Following a previous study (O’Connor et al., 2018 32 ), we sampled from 6 to 11 mostly pooling layers of each CNN (see Supplementary Table  1 for the CNN layers sampled). Pooling layers were selected because they typically mark the end of processing for a block of layers when information is pooled to be passed on to the next block of layers. We extracted the response from each sampled CNN layer for each exemplar of a category and then averaged the responses from the different exemplars to generate a category response, similar to how an fMRI category response was extracted. Following Khaligh-Razavi and Kriegeskorte 8 and Cichy et al. 5 , using RSA, we compared the representational structures of real-world and artificial object categories between the different CNN layers and different human visual regions.
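
As a rough illustration of this category-response extraction (not the authors' actual pipeline), the sketch below hooks one pooling layer of a pretrained torchvision AlexNet, runs a category's exemplar images through the network, and averages the resulting unit responses. The choice of model, layer index, and file paths are assumptions for illustration only.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained CNN; hooking AlexNet's last max-pooling layer is an illustrative choice.
cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
pool_layer = cnn.features[12]    # the final MaxPool2d in AlexNet's feature stack

activations = []
pool_layer.register_forward_hook(
    lambda module, inputs, output: activations.append(output.flatten(1))
)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def category_response(exemplar_paths):
    """Average one layer's unit responses over all exemplars of a single category."""
    activations.clear()
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in exemplar_paths])
    with torch.no_grad():
        cnn(batch)
    return activations[0].mean(dim=0)    # a single vector: the category response

# e.g. category_response(["cat_01.jpg", "cat_02.jpg"])   # hypothetical image paths
```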

The existence of brain–CNN correspondence for representing real-world object images

In Experiments 1 and 2, we wanted to verify the previously reported brain–CNN correspondence for representing real-world object images. We also tested if this finding can be generalized to filtered real-world images.

To compare the representational structure between the human brain and CNNs, in each brain region examined, we first calculated pairwise Euclidean distances of the z-normalized fMRI response patterns among the different object categories in each experiment, with shorter Euclidean distance indicating greater similarity between a pair of fMRI response patterns. From these pairwise Euclidean distances, we constructed a category representational dissimilarity matrix (RDM, see Fig.  1d ) for each of the six brain regions examined. Likewise, from the z-normalized category responses of each sampled CNN layer, we calculated pairwise Euclidean distances among the different categories to form a CNN category RDM for that layer. We then correlated category RDMs between brain regions and CNN layers using Spearman rank correlation following Nili et al. 33 and Cichy et al. 5 (Fig.  1d ). A Spearman rank correlation compares the representational geometry between the brain and a CNN without requiring the two to have a strictly linear relationship. All our results remained the same when Pearson correlation was applied and when correlation measures, instead of Euclidean distance measures, were used to construct the category RDMs (see Supplementary Figs.  3 , 6 , 7 , and 16 ).
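
A bare-bones sketch of this RSA step might look like the following; it is not the authors' code, and details such as the z-scoring axis, the use of the lower triangle for the off-diagonal elements, and the array shapes are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import zscore, spearmanr

def rdm(category_patterns):
    """Pairwise Euclidean distances between z-normalized category patterns.

    category_patterns: (n_categories, n_features) array -- voxels for a brain
    region or units for a CNN layer (shapes are illustrative assumptions).
    """
    z = zscore(category_patterns, axis=1)          # z-normalize each pattern
    return squareform(pdist(z, metric="euclidean"))

def rdm_correlation(rdm_a, rdm_b):
    """Spearman rank correlation of the off-diagonal elements of two RDMs."""
    tri = np.tril_indices_from(rdm_a, k=-1)        # strictly lower triangle
    rho, _ = spearmanr(rdm_a[tri], rdm_b[tri])
    return rho

# Toy data: 8 object categories, 500 voxels vs. 9216 CNN units (hypothetical sizes).
rng = np.random.default_rng(0)
brain_rdm = rdm(rng.random((8, 500)))
layer_rdm = rdm(rng.random((8, 9216)))
print(rdm_correlation(brain_rdm, layer_rdm))
```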

Previous studies have reported a correspondence in representation between lower and higher CNN layers to lower and higher visual processing regions, respectively 5 , 8 . To evaluate the presence of such correspondence in our data, for each CNN, we identified the layer that showed the best RDM correlation with each of the six included brain regions in each participant. We then assessed whether the resulting layer numbers increased from low-to-high visual regions using Spearman rank correlation. If a close brain–CNN correspondence in representation exists, then the Fisher-transformed correlation coefficient of this Spearman rank correlation should be significantly above zero at the group level (one-tailed t tests were conducted to test for significance; one-tailed t tests were used as only values above zero are meaningful; all stats reported were corrected for multiple comparisons for the number of comparisons included in each experiment using the Benjamini–Hochberg procedure at false discovery rate q  = 0.05, see ref. 34 ).
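A minimal sketch of this correspondence test, assuming that for one CNN and one participant we already have a regions-by-layers matrix of RDM correlations (regions ordered V1 to VOT); the function names and data layout are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr, ttest_1samp

def correspondence_coefficient(region_by_layer_corrs):
    """Fisher-transformed rank correlation between region order and best-fitting layer number."""
    best_layers = np.argmax(region_by_layer_corrs, axis=1) + 1          # best layer per region
    rho, _ = spearmanr(np.arange(1, region_by_layer_corrs.shape[0] + 1), best_layers)
    return np.arctanh(np.clip(rho, -0.999, 0.999))                      # clip keeps the transform finite

def group_level_test(per_participant_matrices):
    zs = [correspondence_coefficient(m) for m in per_participant_matrices]
    t, p = ttest_1samp(zs, popmean=0.0, alternative="greater")          # one-tailed: only > 0 is meaningful
    return t, p  # p values across conditions would then be FDR-corrected (Benjamini-Hochberg, q = 0.05)
```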

In Experiment 1, we contrasted original real-world object images with the controlled version of these images. Figure  2a shows the average CNN layer that best correlated with each brain region for each CNN during the processing of these images (the exact significance levels of the brain–CNN correspondence are marked with asterisks at the top of each plot). Here, 10 out of the 14 CNNs examined showed a significant brain–CNN correspondence for the original images. The same correspondence was also seen for the controlled images, with 11 out of the 14 CNNs showing a significant brain–CNN correspondence.

figure 2

A The results from Experiment 1, in which original and controlled images from real-world object categories were shown. N  = 6 human participants. B The results from Experiment 2, in which full, high, and low SF components of the images from real-world object categories were shown. N  = 10 human participants. C The results from Experiment 3, in which unaltered images from both real-world (natural) and artificial object categories were shown. N  = 6 human participants. Plotted here are the averaged CNN layer numbers across the human participants that showed the greatest RDM correlation for each brain region in each experimental condition, with the error bars indicating the standard errors of the mean across participants. To evaluate brain–CNN correspondence, in each human participant, the CNN layer that showed the highest RDM correlation with each of the six brain regions was identified. A Spearman rank correlation was carried out for each participant to assess whether the resulting layer numbers increased from low to high human visual regions. The resulting correlation coefficients (Fisher-transformed) were tested for being greater than zero at the participant group level using one-tailed t tests. The asterisks at the top of each plot mark the significance level of these statistical tests, with a significant result indicating that the RDMs from lower CNN layers better correlated with those of lower than higher visual regions and the reverse being true for higher CNN layers. All t tests were corrected for multiple comparisons for the number of image conditions included in each experiment using the Benjamini–Hochberg procedure. † p  < 0.1, * p  < 0.05, ** p  < 0.01, *** p  < 0.001. Source data are provided as a Source Data file.

In Experiment 2, we contrasted original real-world images with the high and low SF component versions of these images (Fig.  2b ). For the original images, we replicated the findings from Experiment 1, with 13 out of the 14 CNNs showing a significant brain–CNN correspondence. The same correspondence was also present in 13 CNNs for the high SF images and in 8 CNNs for the low SF images. In fact, Alexnet, Cornet-S, Googlenet, Inception-v3, Mobilenet-v2, Resnet-18, Resnet-50, Squeezenet, and VGG16 showed a significant brain–CNN correspondence for all five image sets across the two experiments. These results remained the same when correlations, instead of Euclidean distance measures, were used to construct the category RDMs, and when Pearson, instead of Spearman, correlation was applied to compare CNN and brain RDMs (Supplementary Fig.  3 ).

These results replicate previous findings using the RSA approach 5 , 8 and show that there indeed existed a brain–CNN correspondence, with representations in lower and higher visual areas better resembling those of lower and higher CNN layers, respectively. Importantly, such a brain–CNN correspondence is generalizable to filtered real-world object images.

Quantifying the amount of brain–CNN correspondence for representing real-world object images

A linear correspondence between CNN and brain representations, however, only tells us that lower CNN layers are relatively more similar to lower than higher visual areas and that the reverse is true for higher CNN layers. It does not tell us about the amount of similarity. To assess this, we evaluated how successfully the category RDM from a CNN layer could capture the RDM of a brain region. To do so, we first obtained the reliability of the category RDM in a brain region across human participants by calculating the lower and upper bounds of the fMRI noise ceiling 33 . Overall, the lower bounds of the fMRI noise ceilings for the different brain regions were much higher in our two experiments than those of Khaligh-Razavi and Kriegeskorte 8 (Supplementary Figs.  4A and 5A). These results indicate that the object category representational structures in our data are fairly similar and consistent across participants.

If the category RDM from a CNN layer successfully captures that from a brain region, then the correlation between the two should exceed the lower bound of the fMRI noise ceiling. This can be re-represented as the proportion of explainable brain RDM variance captured by the CNN (by dividing the brain–CNN RDM correlation by the lower bound of the corresponding noise ceiling and then taking the square of the resulting ratio; all correlation results are reported in Supplementary Figs.  4 – 7 ). For the original real-world object images in Experiment 1, the brain RDM variance from lower visual areas was fully captured by three CNNs (Fig.  3a ), including Alexnet, Googlenet, and Vgg16 (with no difference between 1 and the highest proportion of variance explained by a CNN layer for V1–V3, one-tailed t tests, ps  > 0.1; see the asterisks marking the exact significance levels at the top of each plot; one-tailed t tests were used here as only testing the values below 1 was meaningful; all p values reported were corrected for multiple comparisons for the 6 brain regions included using the Benjamini–Hochberg procedure at false discovery rate q  = 0.05). However, no CNN layer was able to fully capture the RDM variance from visual areas LOT and VOT (with significant differences between 1 and the highest proportion of variance explained by a CNN layer for LOT and VOT, p s < 0.05, one-tailed and corrected). The same pattern of results was observed when the controlled images were used in Experiment 1 (Fig.  3b ): several CNNs were able to fully capture the RDM variance of lower visual areas but none was able to do so for higher visual areas. We obtained similar results for the original, high SF, and low SF images in Experiment 2 (Fig.  4a–c ). Here again, a number of CNNs fully captured the RDM variance of lower visual areas, but none could do so for higher visual areas. All these results remained the same when correlations, instead of Euclidean distance measures, were used to construct the category RDMs, and Pearson, instead of Spearman, correlations were applied to compare CNN and brain RDMs (see the correlation results in Supplementary Figs.  4 – 7 ; note that although using Euclidean distance measures after pattern z-normalization to construct the RDMs produced highly similar results as those from correlation measures, they were not identical).

figure 3

A Results for the Original images. B Results for the Controlled images. N  = 6 human participants. The asterisks at the top of each plot mark the significance level of the difference between 1 and the highest proportion of variance explained by a CNN for each brain region; one-tailed t -tests were used as only values below 1 were meaningful here; all p values reported were corrected for multiple comparisons for the six brain regions included using the Benjamini–Hochberg procedure. Error bars indicate standard errors of the means. † p  < 0.1, * p  < 0.05, ** p  < 0.01, *** p  < 0.001. Source data are provided as a Source Data file.

figure 4

A Results for Full SF images. B Results for High SF images. C Results for Low SF images. N  = 10 human participants. The asterisks at the top of each plot mark the significance level of the difference between 1 and the highest proportion of variance explained by a CNN for each brain region; one-tailed t tests were used and all p values reported were corrected for multiple comparisons for the six brain regions included using the Benjamini–Hochberg procedure. Error bars indicate standard errors of the means. † p  < 0.1, * p  < 0.05, ** p  < 0.01, *** p  < 0.001. Source data are provided as a Source Data file.

In our fMRI experiments, we used a randomized presentation order for each of the experimental runs with two image repetitions. When we simulated the exact fMRI design in Alexnet by generating a matching number of randomized presentation sequences with image repetitions and then averaging CNN responses for these sequences, we obtained virtually identical Alexnet results as those without this simulation (Supplementary Fig.  4D ). Thus, the disagreement between our fMRI and CNN results could not be due to a difference in stimulus presentation. The very fact that CNNs could fully capture the brain RDM variance in lower visual areas for real-world objects further supports this idea and additionally shows that the non-linearity in fMRI measures had a minimal impact on RDM extraction. The latter speaks to the robustness of the RSA approach as extensively reviewed elsewhere 16 .
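The following is a rough sketch of such a simulation, under the assumption that a category's CNN responses are stored as an exemplars-by-units array; because block responses are averaged, only which exemplars repeat matters, not where in the sequence the repeats fall.

```python
import numpy as np

def simulated_block_response(exemplar_features, n_repeats=2, rng=None):
    """Average CNN responses over a run-like sequence: each exemplar once plus two random repeats."""
    rng = rng or np.random.default_rng()
    n_exemplars = exemplar_features.shape[0]
    repeats = rng.choice(n_exemplars, size=n_repeats, replace=False)
    sequence = np.concatenate([np.arange(n_exemplars), repeats])  # ordering is irrelevant for the mean
    return exemplar_features[sequence].mean(axis=0)

# Averaging such simulated block responses over many "runs" approximates the fMRI block design.
```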

Together, these results showed that, although lower layers of several CNNs could fully capture the explainable brain RDM variance for lower-level visual representations of both the original and filtered real-world object images in the human brain, none could do so for higher-level neural representations of these images. In fact, the highest amount of explainable brain RDM variance that could be captured by CNNs from higher visual regions LOT and VOT was about 60%, on par with previous neurophysiological results from macaque IT cortex 9 , 10 , 18 , 19 .

To directly visualize the object representational structures in different brain regions and CNN layers, using multi-dimensional scaling (MDS, Shepard, 1980 35 ), we placed the RDMs on 2D spaces with the distances among the categories approximating their relative similarities to each other. Figure  5a, b shows the MDS plots from the two lowest and the two highest brain regions examined (i.e., V1, V2, LOT, and VOT) and from the two lowest and the two highest layers sampled from four example CNNs (i.e., Alexnet, Cornet-S, Googlenet, and Vgg-19) from Experiments 1 and 2 (see Supplementary Figs.  8–12 for the MDS plots from all brain regions and CNN layers sampled). Consistent with our quantitative analysis, for the real-world objects, there were some striking brain–CNN representational similarities at lower levels of object representation (such as in Alexnet and Googlenet). At higher levels, both the brain and CNNs showed a broad distinction between animate and inanimate objects (i.e., bodies, cats, elephants, and faces vs. cars, chairs, houses, and scissors), but they differed in how these categories were represented relative to each other. For example, within the animate objects, while faces and bodies are far apart in both VOT and LOT, they are next to each other in higher CNN layers (see the objects marked by the dotted circles in Fig.  5 ); and within the inanimate objects, while cars, chairs, houses, and scissors tend to form a square in VOT and LOT, they tend to form a line in higher CNN layers (see the objects marked by the dashed ovals in Fig.  5 ).
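For reference, a 2D MDS embedding of an RDM can be obtained with a few lines of Python (a sketch assuming scikit-learn; the study's plots may have been produced with different software).

```python
import numpy as np
from sklearn.manifold import MDS

def mds_embedding(rdm, seed=0):
    """Project an n_categories x n_categories RDM onto 2D, preserving pairwise dissimilarities."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(np.asarray(rdm))  # (n_categories, 2) coordinates for plotting
```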

figure 5

A Results for the Original real-world object images. B Results for the Controlled real-world object images. C Results for the artificial object images. Brain responses included here are those for the original real-world images from both Experiments 1 and 3, those for the controlled real-world images from Experiment 1, and those for the artificial object images from Experiment 3. The distances among the object categories in each MDS plot approximate their relative similarities to each other in the corresponding RDM. Only MDS plots from the two lowest and the two highest brain regions examined (i.e., V1, V2, LOT, and VOT) and from the two lowest and two highest layers sampled from four example CNNs (i.e., Alexnet, Cornet-S, Googlenet, and Vgg-19) are included here. See Supplementary Figs.  8 – 12 and 17 for MDS plots from all brain regions and CNN layers examined. Since rotations and flips preserve distances on these MDS plots, to make these plots more informative and to see how the representational structure evolved across brain regions and CNN layers, we manually rotated and/or flipped each MDS plot when necessary. For real-world objects, there were some remarkable brain–CNN similarities at lower levels of object representation (see Alexnet and Googlenet). At higher levels, although both showed a broad distinction between animate and inanimate objects (i.e., bodies, cats, elephants, and faces vs. cars, chairs, houses, and scissors), they differed in how these categories were represented relative to each other. For example, within the animate objects, while faces and bodies are far apart in both VOT and LOT, they are next to each other in higher CNN layers (see the objects marked by the dotted circles in (A)); and within the inanimate objects, while cars, chairs, houses, and scissors tend to form a square in VOT and LOT, they tend to form a line in higher CNN layers (see the objects marked by the dashed ovals in (A)). For the artificial object images, brain–CNN differences at the lower level are not easily interpretable. Differences at the higher level suggest that while the brain takes both local and global shape similarities into account when grouping objects, CNNs rely mainly on local shape similarities. This can be seen in the grouping of the objects at higher CNN layers and by comparing the purple and fuchsia shapes that share the same global but different local features (see the objects marked by the dotted circles in (C)). Source data are provided as a Source Data file.

LOT and VOT included a large swath of the ventral and lateral OTC and likely overlapped to a great extent with regions selective for specific object categories, such as faces, bodies, or scenes. Because CNNs may not automatically develop category-selective units during object categorization training, it is possible that the brain–CNN RDM discrepancy we observed so far at higher levels of visual processing is solely driven by the category-selective voxels in the human brain. To investigate this possibility, using the main experimental data, we evaluated the category selectivity of each voxel in LOT and VOT (see “Methods”). We then excluded all voxels showing a significant category selectivity for faces, bodies, or scenes (i.e., houses) and repeated our analysis. In most cases, the amount of the brain RDM variance that could be captured by CNNs remained unchanged whether category-selective voxels were included or excluded (see Supplementary Figs.  13 and 14 ). Significant differences were observed in only 6% of the comparisons ( ps  < 0.05, uncorrected, see the caption of Supplementary Figs.  13 and 14 for a list of these cases). However, even in these cases, the maximum amount of LOT and VOT RDM variance captured by CNNs was still significantly less than 1 ( ps  < 0.05, corrected). Moreover, when the same unaltered images were shown across the different experiments, the improvement seen in one experiment was not replicated in another experiment (e.g., the improvement seen in Alexnet for Experiment 2 Full-SF was not replicated in Experiment 3 Natural, see Supplementary Figs.  14 and 18 ). Consistent with these results, MDS plots for LOT and VOT look quite similar whether or not category-selective voxels were included (see Supplementary Figs.  8–12 ). As such, the failure of CNNs to fully capture brain RDM at higher levels of visual processing cannot be attributed to the presence of category-selective voxels in LOT and VOT.

One could argue that CNNs generally do not encounter disembodied heads or headless bodies in their training data. They are thus unlikely to have distinctive representations for heads and bodies. Note that the human visual system generally does not see such stimuli in its training data either. The goal of the study is, therefore, not to test images that a system has been exposed to during training, but rather how it handles images that it has not. If the two systems are similar in their underlying representation, then they should still respond similarly to images that they have not been exposed to during training. If not, then it indicates that the two systems represent visual objects in different ways. We present a stronger test case in the next experiment by comparing the representations of artificial visual stimuli between the brain and CNNs.

The brain–CNN correspondence for representing artificial object images

Previous comparisons of visual processing in the brain and CNNs have focused entirely on the representation of real-world objects. Decades of visual neuroscience research, however, have successfully utilized simple and artificial visual stimuli to uncover the complexity of visual processing in the primate brain (e.g., 36 , 37 , 38 , 39 ), with Tanaka and colleagues, in particular, showing that IT responses to some real-world objects are highly similar to their responses to artificial shapes 39 . The same algorithms used in the processing of natural images thus manifest themselves in the processing of artificial visual stimuli. If CNNs are to be used as working models of the primate visual brain, it would be critical to test whether this principle applies to CNNs.

Testing simple and artificial visual stimuli also allows us to address a remaining concern for the results obtained so far. It could be argued that the reason CNNs performed poorly in fully tracking high-level processing of the real-world objects even when category-selective voxels were removed was due to interactions between category-selective and non-selective brain regions. With the artificial visual stimuli, however, no preexisting category information, semantic knowledge, or experience with the stimuli could affect visual processing at a higher level. This would put the brain and CNNs on an even footing. If CNNs still fail to track the processing of the artificial visual stimuli at higher levels, it would indicate some fundamental differences in how the brain and CNNs process visual information, rather than the particularity of the stimuli used.

In Experiment 3, we compared the processing of both real-world objects and artificial objects between the brain and CNNs. As in Experiments 1 and 2, the processing of real-world objects showed a consistent brain–CNN correspondence in 8 out of the 14 CNNs tested (Fig.  2c ). The same correspondence was also obtained in eight CNNs when artificial objects were shown, with lower visual representations in the brain better resembling those of lower than higher CNN layers and the reverse being true for higher visual representations in the brain (Fig.  2c and Supplementary Fig.  3 ). In fact, across Experiments 1–3, Alexnet, Cornet-S, Googlenet, Resnet-18, Resnet-50, Squeezenet, and VGG16 were the seven CNNs showing a consistent brain–CNN correspondence across all our image sets, including the original and filtered real-world object images, as well as the artificial object images.

As before, for real-world objects, while some of the CNNs were able to fully capture the brain RDM variance from lower visual areas, none could do so for higher visual areas (Fig.  6a ). For artificial object images, while the majority of the CNNs still failed to fully capture the brain RDM variance of higher visual areas, surprisingly, no CNN was able to do so for lower visual areas anymore (with significant differences between 1 and the highest proportion of variance explained by a CNN layer for V1 and V2, all p s < 0.05, one-tailed and corrected; see the asterisks marking the exact significance levels at the top of each plot for the full stats). In fact, the amount of the brain RDM variance captured in lower visual areas dropped significantly or marginally significantly between the natural and artificial objects in several CNNs (Alexnet, p  = 0.062 for V1, p  = 0.074 for V2; Googlenet, p  = 0.012 for V1, p  = 0.023 for V2; Mobilenet-v2, p  = 0.032 for V2; Squeezenet, p  = 0.022 for V1, p  = 0.0085 for V2; Vgg-16, p  = 0.003 for V1, p  = 0.0042 for V2, p  = 0.094 for V3; and Vgg-19, p  = 0.048 for V1, p  = 0.0077 for V2; all reported p values were corrected for multiple comparisons for the six brain regions examined). This rendered the few CNNs that were capable of fully capturing the brain variance from the lower visual areas during the processing of real-world objects no longer able to do so during the processing of artificial objects (Fig.  6b ; see also the correlation results in Supplementary Figs.  15 and 16 ). In other words, as a whole, CNNs performed much worse in capturing visual processing of artificial than real-world objects in the human brain, and their ability to capture lower-level visual processing of real-world objects in the brain did not generalize to the processing of artificial objects.

figure 6

A Results for real-world object images. B Results for artificial object images. N  = 6 human participants. The asterisks at the top of each plot mark the significance level of the difference between 1 and the highest proportion of variance explained by a CNN for each brain region; one-tailed t tests were used and all p values reported were corrected for multiple comparisons for the six brain regions included using the Benjamini–Hochberg procedure. Error bars indicate standard errors of the means. † p  < 0.1, * p  < 0.05, ** p  < 0.01, *** p  < 0.001. Source data are provided as a Source Data file.

For artificial objects, RDM differences between lower brain regions and lower CNN layers were not easily interpretable from the MDS plots (Fig.  5c and Supplementary Fig.  17 ). RDM differences between higher brain regions and higher CNN layers suggest that while the brain takes both local and global shape similarities into consideration when grouping objects, CNNs rely mainly on local shape similarities (e.g., compare higher brain and CNN representations of the shapes marked by purple and fuchsia colors that share the same global but different local features; see the objects marked by the dotted circles in Fig.  5c ). This is consistent with other findings that specifically manipulated local and global shape similarities (see “Discussion”). Lastly, as in Experiments 1 and 2, removing the category-selective voxels in LOT and VOT did not improve CNN performance (see Supplementary Fig.  18 ).

Overall, taking both the linear correspondence and RDM correlation into account, none of the CNNs examined here could fully capture lower or higher levels of neural processing of artificial objects. This is particularly critical given that a number of CNNs were able to fully capture the lower-level neural processing of real-world objects.

The effect of training a CNN on original vs. stylized image-net images

Although CNNs are believed to explicitly represent object shapes in the higher layers 1 , 40 , 41 , emerging evidence suggests that CNNs may largely use local texture patches to achieve successful object classification 42 , 43 or local rather than global shape contours for object recognition 44 . In a recent demonstration, CNNs were found to be poor at classifying objects defined by silhouettes and edges. In addition, when texture and shape cues were in conflict, they classified objects according to texture rather than shape cues 31 (see also ref. 44 ). However, when Resnet-50 was trained with stylized ImageNet images in which the original texture of every single image was replaced with the style of a randomly chosen painting, object classification performance significantly improved, relied more on shape than texture cues, and became more robust to noise and image distortions 31 . It thus appears that a suitable training data set may overcome the texture bias in standard CNNs and allow them to utilize more shape cues.

We tested whether the category RDM in a CNN becomes more brain-like when the CNN is trained with stylized ImageNet images. To do so, we compared the representations formed in Resnet-50 pretrained with ImageNet images with those from Resnet-50 pretrained with three other procedures 31 : trained only with the stylized ImageNet images, trained with both the original and the stylized ImageNet images, and trained with both sets of images and then fine-tuned with the stylized ImageNet images. Despite differences in training, the category RDM correlations between brain regions and CNN layers were remarkably similar among these Resnet-50s, and all were substantially different from those of the human visual regions (Supplementary Fig.  19 ). If anything, training with the original ImageNet images resulted in a better brain–CNN correspondence in several cases than the other training conditions. The incorporation of stylized ImageNet images in training thus did not result in more brain-like visual representations in Resnet-50.

It has become common practice in recent human fMRI research to regard CNNs as a working model of the human visual system. This is largely based on fMRI studies showing that representations formed in CNN lower and higher layers track those of the human lower and higher visual processing regions, respectively 5 , 6 , 7 , 8 . Here, we reevaluated this finding with more robust fMRI data sets from three experiments and 14 different CNNs, and tested whether this finding generalizes to filtered real-world object images and artificial object images.

We found a significant correspondence in visual representational structure between the CNNs and the human brain across various image manipulations for both real-world and artificial object images, with representations formed in CNN lower layers more closely resembling those of lower than higher human visual areas and the reverse being true for higher CNN layers. In addition, we found that lower layers of several CNNs fully captured the representational structures of real-world objects of human lower visual areas for both the original and the filtered versions of these images. This replicated earlier results and showed that CNNs are capable of capturing some aspects of visual processing in the human brain.

Despite these successes, however, no CNN tested could fully capture the representational structure of the real-world object images in human higher visual areas. The same results were obtained regardless of whether or not category-selective voxels were included in human higher visual areas. Overall, the highest amount of explainable brain RDM variance that could be captured by CNNs from higher visual regions was about 60%. This is in agreement with previous neurophysiological studies on macaque IT cortex 9 , 10 , 18 , 19 . When artificial object images were used, not only did most of the CNNs still fail to capture visual processing in higher human visual areas, but none could do so for lower human visual areas either. Overall, no CNN examined could fully capture all levels of visual processing for both real-world and artificial objects, with similar performance observed in both shallow and deep CNNs (e.g., Alexnet vs. Googlenet). Although the recurrent CNN examined here, Cornet-S, closely models neural processing and is argued to be the current best model of the primate ventral visual regions 12 , 19 , it did not outperform the other CNNs. The same results were also obtained when a CNN was trained with stylized object images that emphasized shape features in its representation. The main results across the three experiments are summarized in Fig.  7 , with Fig.  7a showing the results from the six conditions across the three experiments examining the real-world objects (i.e., the results from Figs.  3 , 4 , and 6a ) and Fig.  7b showing the results for the artificial objects (i.e., the results from Fig.  6b ). Alexnet, Googlenet, Squeezenet, and Vgg-16 showed the best brain–CNN correspondence overall for representing real-world objects among the 14 CNNs examined.

figure 7

A Summary of results from the six conditions across the three experiments that examined the processing of real-world object images (i.e., a summary of results from Figs.  3 , 4 , and 6 ). B Summary of results for the processing of artificial objects (i.e., results from Fig.  6b ). In A , each colored bar represents the averaged proportion of brain variance explained, with that from each condition marked by a black symbol. For real-world objects, a few CNNs (i.e., Alexnet, Googlenet, Squeezenet, and Vgg-16) were able to consistently capture brain RDM variance from lower human visual regions (i.e., V1–V3). No CNN was able to do so for higher human visual regions (i.e., LOT and VOT). The CNNs capable of fully capturing lower-level brain RDM variance for real-world objects all failed to capture that of the artificial objects from either lower or higher human visual regions. Source data are provided as a Source Data file.

Although we examined object category responses averaged over multiple exemplars rather than responses to each object, previous research has shown similar category and exemplar response profiles in macaque IT and human lateral occipital cortex with more robust responses for categories than individual exemplars due to an increase in SNR 45 , 46 . Rajalingham et al. 2 additionally reported better behavior-CNN correspondence at the category but not at the individual exemplar level. Thus, comparing the representational structure at the category level, rather than at the exemplar level, should have increased our chance of finding a close brain–CNN correspondence. Yet despite the overall brain and CNN correlations for object categories being much higher here than in previous studies for individual objects 5 , 8 , CNNs failed to fully capture the representational structure of real-world objects in the human brain and performed even worse for artificial objects. Object category information is shown to be better represented by higher than lower visual regions (e.g., 47 ). Our use of object category was thus not optimal for finding a close brain–CNN correspondence at lower levels of visual processing. Yet we found better brain–CNN correspondence at lower than higher levels of visual processing for real-world object categories. This suggests that information that defines the different real-world object categories is present at lower levels of visual processing and is captured by both lower visual regions and lower CNN layers. This is not surprising as many categories may be differentiated based on low-level features even with a viewpoint/orientation change, such as curvature and the presence of unique features (e.g., the large round outline of a face/head, the protrusion of the limbs in animals) 48 . Finally, it could be argued that the dissimilarity between the brain and CNNs at higher levels of visual processing for real-world object categories could be driven by feedback from high-level nonvisual regions and/or feedback from category-selective regions in the human ventral cortex for some of the categories used (i.e., faces, bodies, and houses). However, such feedback should greatly decrease for artificial object categories. Yet we failed to see much improvement in brain–CNN correspondence at higher levels of processing for these objects. If anything, even the strong correlation at lower levels of visual processing for real-world objects no longer existed for these artificial objects.

Decades of vision science research have relied on using simple and artificial visual stimuli to uncover the complexity of visual processing in the primate brain, showing that the same algorithms used in the processing of natural images would manifest themselves in the processing of artificial visual stimuli. The artificial object images tested here have been used in previous fMRI studies to understand object processing in the human brain (e.g., Op de Beeck et al., 2008 21 , 24 , 27 ). In particular, we showed that the transformation of visual representational structures across occipito-temporal and posterior parietal cortices follows a similar pattern for both the real-world objects and the artificial objects used here 21 . The disconnection between the representation of real-world and artificial object images in CNNs is in disagreement with this long-held principle in primate vision research and suggests that, even at lower levels of visual processing, CNNs differ from the primate brain in fundamental ways. Such a divergence will undoubtedly contribute to even greater divergence at higher levels of processing between the primate brain and CNNs.

Using real-world object images, recent studies have tried to improve brain and CNN RDM correlation by incorporating brain responses during CNN training. Using a recurrent network architecture, Kietzmann et al. 49 used both brain RDM and object categorization to guide CNN training and found that brain and CNN RDM correlation was still significantly below the noise ceiling in all human ventral visual regions examined. Khaligh-Razavi et al. 50 used a mixed RSA approach by first finding the best linear transformation between fMRI voxels and CNN layer units and then performing RDM correlations (see also ref. 10 ). The key idea here is that CNNs may contain all the right brain features in visual processing but that these features are improperly combined. Training enables remixing and recombination of these features and can result in a better brain–CNN alignment in representational structure. Using the mixed RSA approach, Khaligh-Razavi et al. 50 reported that the correlation between brain and CNN was able to reach the noise ceiling for LO. However, brain–CNN correlations were fairly low for all brain regions examined (i.e., V1–V4 and LO), with the noise ceiling ranging from just below 0.5 in V1 to just below 0.2 in LO (thus the amount of explainable variance in LO was less than 4%, which is very low). The low LO noise ceiling again raises concerns about the robustness of this finding (as it did for ref. 8 ). Khaligh-Razavi et al. 50 used a large data set from Kay et al. 51 , which contained 1750 unique training images with each shown twice, and 120 unique testing images with each shown 13 times. Our data in comparison are limited, containing between 16 and 18 different stimulus conditions, each shown 16 to 18 times. We are thus underpowered to perform the mixed RSA analysis here to provide an objective evaluation of this approach. It should be noted that applying the mixed RSA analysis is not as straightforward as it seems, as we do not fully understand the balance between decreased model performance due to overfitting and increased model performance due to feature mixing, as well as the minimum amount of data needed for training and testing. In addition, a mixed RSA approach requires brain responses from a large number of single images. This will necessarily result in lower power and lower reliability across participants. In other words, due to noise, only a small amount of consistent neural response is preserved across participants (as in Khaligh-Razavi et al. 50 ), resulting in much of the neural data used to train the model likely just being subject-specific noise. This can significantly weaken the mixed RSA approach. Moreover, whether a mixed RSA model trained with one kind of object image (e.g., real-world object images) may accurately predict the responses from another kind of object image (e.g., artificial object images) has not been tested. Thus, although the general principle of a mixed RSA approach is promising, what it can actually deliver remains to be seen. In our study, we found good brain–CNN correspondence between lower CNN layers and lower visual areas for processing real-world objects. Thus, the mixing of the different features in lower CNN layers is well-matched with that of lower visual areas. Yet these lower CNN layers fail to capture lower visual areas’ responses for artificial objects. This indicates that some fundamental differences exist between the brain and CNNs at lower levels of visual processing that may not be overcome by remixing the CNN features.

What could be driving the difference between the brain and CNNs in visual processing? In recent studies, Baker et al. 44 and Geirhos et al. 31 , 52 reported that CNNs rely on local texture and shape features rather than global shape contours. This may explain why in our study lower CNN layers were able to fully capture the representational structures of real-world object images in lower visual areas, as processing in these brain areas likely relies more on local contours and texture patterns given their smaller receptive field sizes. As high-level object vision relies more on global shape contour processing (e.g., 53 ), the lack of such processing in CNNs may account for CNNs’ inability to fully capture processing in higher visual areas. This can be seen more directly in higher-level representations of our artificial objects (which share similar texture and contour elements at the local level but differ in how these elements are conjoined at the local and global levels). Specifically, while the brain takes both local and global shape similarities into consideration when grouping these objects, CNNs may rely mainly on local shape similarities (see the MDS plots in Fig.  5 and Supplementary Fig.  17 ). At lower levels of visual processing, the human brain likely encodes both shape elements and how they are conjoined at the local level to help differentiate the different artificial objects. CNNs, on the other hand, may rely more on the presence/absence of a particular texture patch or a shape element than on how they are conjoined at the local level to differentiate these objects. This may account for the divergence between the brain and CNNs at lower levels of visual processing for these artificial objects. Training with stylized images did not appear to improve performance in Resnet-50, suggesting that the differences between CNNs and the human brain may not be overcome by this type of training.

In two other studies involving real-world object images, we found additional differences between the human brain and CNNs in the development of transformation-tolerant visual representations and the relative coding strength of object identity and nonidentity features 54 , 55 . Forming transformation-tolerant object identity representations has been argued to be the hallmark of primate vision, as it reduces the complexity of learning by requiring far fewer training examples, with the resulting representations being more generalizable to new instances of an object (e.g., in different viewing conditions) and to new exemplars of a category not included in the training. It could potentially dictate how objects are organized in the representational space in the brain, as examined in this study. While the magnitude of invariant object representation increases from lower to higher visual areas in the human brain, in the same 14 CNNs tested here, such invariance actually goes down from lower to higher CNN layers 54 . With their vast computing power, CNNs likely associate different instances of an object via a brute-force approach (e.g., by simply grouping all instances of an object encountered under the same object label) without necessarily preserving the relationships among the objects across transformations and forming transformation-tolerant object representations. This again suggests that CNNs use a fundamentally different mechanism to group objects and solve the object recognition problem compared to the primate brain. In another study 55 , we documented the relative coding strength of object identity and nonidentity features during visual processing in the human brain and CNNs. We found that identity representation increased and nonidentity feature representation decreased along the ventral visual pathway. In the same 14 CNNs examined here, while identity representation increased over the course of visual processing, nonidentity feature representation showed an initial large increase followed by a decrease at later stages of processing, different from the brain responses. As a result, higher CNN layers deviated more from the corresponding brain regions than lower layers did in how object identity and nonidentity features are coded with respect to each other. This is consistent with the RDM comparison results reported in this study.

CNNs’ success in object categorization and their response correspondence with the primate visual areas have opened the exciting possibility that perhaps we can use CNN modeling as a viable scientific method to study primate vision. Presently, the detailed computations performed by CNNs are difficult for humans to understand, rendering them poorly understood information processing systems 3 , 56 . By analyzing results from three fMRI experiments and comparing visual representations in the human brain with 14 different CNNs, we found that CNNs’ performance is related to how they are built and trained: they are built following the known architecture of the primate lower visual areas and are trained with real-world object images. Consequently, the best-performing CNNs (i.e., Alexnet, Googlenet, Squeezenet, and Vgg-16) are successful at fully capturing the visual representational structures of lower human visual areas during the processing of both original and filtered real-world images, but not those of higher human visual areas during the processing of these images or those of artificial images at either level of processing. The close brain–CNN correspondence found in earlier fMRI studies thus might have been overly optimistic by including only real-world objects (which CNNs are generally trained on) and testing on data with relatively low power. When we expanded the comparisons here to a broader set of filtered real-world stimuli and to artificial stimuli, and tested on brain data with higher power, we saw large discrepancies between the brain and CNNs at both lower and higher levels of visual processing. While CNNs are successful in object recognition, some fundamental differences likely exist between the human brain and CNNs and preclude CNNs from fully modeling the human visual system in their current state. This is unlikely to be remedied by simply changing the training images, changing the depth of the network, and/or adding recurrent processing. Rather, some fundamental changes may be needed to make CNNs more brain-like. This may only be achieved through continued research on the precise algorithms used by the primate brain in visual processing to further guide CNN model development.

fMRI experimental details

Details of the fMRI experiments have been described in two previously published studies 21 , 22 . They are summarized here for the readers’ convenience (see also Table  1 ).

Six, ten, and six healthy human participants with normal or corrected-to-normal visual acuity, all right-handed and aged between 18 and 35, took part in Experiments 1–3, respectively. The sample size for each fMRI experiment was chosen based on prior published studies (e.g., 57 , 58 ). All participants gave their written informed consent before the experiments and received payment for their participation. The experiments were approved by the Committee on the Use of Human Subjects at Harvard University. Each main experiment was performed in a separate session lasting between 1.5 and 2 h. Each participant also completed two additional sessions for topographic mapping and functional localizers. MRI data were collected using a Siemens MAGNETOM Trio, A Tim System 3T scanner, with a 32-channel receiver array head coil. For all the fMRI scans, a T2*-weighted gradient echo pulse sequence with a TR of 2 s and a voxel size of 3 mm × 3 mm × 3 mm was used. fMRI data were analyzed using FreeSurfer (surfer.nmr.mgh.harvard.edu), FsFast 59 , and in-house MATLAB code. fMRI data preprocessing included 3D motion correction, slice timing correction, and linear and quadratic trend removal. Following standard practice, a general linear model was applied to the fMRI data to extract beta weights as response estimates.

In Experiment 1, we used cut-out gray-scaled images from eight real-world object categories (faces, bodies, houses, cats, elephants, cars, chairs, and scissors) and modified them to occupy roughly the same area on the screen (Fig.  1b ). For each object category, we selected ten exemplar images that varied in identity, viewpoint/orientation, and pose (for the animal categories) to minimize the low-level similarities among them (see Supplementary Fig.  1 for the full set of images used). In this and the two experiments reported below, objects were always presented at fixation, and object positions never varied. In the original image condition, unaltered images were shown. In the controlled image condition, images were shown with contrast, luminance, and spatial frequency equalized across all the categories using the SHINE toolbox 23 (see Fig.  1b ). Participants fixated at a central red dot throughout the experiment. Eye-movements were monitored in all the fMRI experiments to ensure proper fixation.

During the experiment, blocks of images were shown. Each block contained a random sequential presentation of ten exemplars from the same object category. Each image was presented for 200 ms followed by a 600 ms blank interval between the images (Fig.  1a ). Participants detected a one-back repetition of the exact same image. This task focused participants’ attention on the object shapes and ensured robust fMRI responses. However, similar visual representations may be obtained when participants attend to the color of the objects 60 , 61 (see also 62 ). Two image repetitions occurred randomly in each image block. Each experimental run contained 16 blocks, one for each of the eight categories in each image condition (original or controlled). The order of the eight object categories and the two image conditions was counterbalanced across runs and participants. Each block lasted 8 s and was followed by an 8-s fixation period. There was an additional 8-s fixation period at the beginning of the run. Each participant completed one scan session with 16 runs for this experiment, each lasting 4 min 24 s.

In Experiment 2, only six of the original eight object categories were used: faces, bodies, houses, elephants, cars, and chairs. Images were shown in three conditions: Full-SF, High-SF, and Low-SF. In the Full-SF condition, the full-spectrum images were shown without modification of the SF content. In the High-SF condition, images were high-pass filtered using an FIR filter with a cutoff frequency of 4.40 cycles per degree (Fig.  1b ). In the Low-SF condition, the images were low-pass filtered using an FIR filter with a cutoff frequency of 0.62 cycles per degree (Fig.  1b ). The DC component was restored after filtering so that the image backgrounds were equal in luminance. Each run contained 18 blocks, one for each of the category and SF condition combinations. Each participant completed a single scan session containing 18 experimental runs, each lasting 5 min. Other details of the experiment design were identical to that of Experiment 1.
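As an illustration only, the sketch below uses an FFT-based stand-in for this kind of SF filtering (the study itself used FIR filters); the cutoffs of 4.40 and 0.62 cycles per degree come from the text, while the image size in degrees of visual angle (image_deg) is an assumption you would supply.

```python
import numpy as np

def sf_filter(img, cutoff_cpd, image_deg, keep="low"):
    """img: 2D grayscale array; image_deg: image width in degrees of visual angle."""
    f = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=image_deg / img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=image_deg / img.shape[1]))
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)          # spatial frequency in cycles/degree
    mask = radius <= cutoff_cpd if keep == "low" else radius >= cutoff_cpd
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    filtered += img.mean() - filtered.mean()                        # restore the DC / mean luminance
    return filtered

# e.g., sf_filter(img, 4.40, image_deg=10, keep="high") or sf_filter(img, 0.62, image_deg=10, keep="low")
```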

In Experiment 3, we used unaltered images from both real-world and artificial object categories. The real-world categories were the same eight categories used in Experiment 1, with the exemplars varying in identity, viewpoint/orientation, and pose (for the animal categories) to minimize the low-level similarities among them. The artificial object categories were nine categories of computer-generated 3D shapes (ten images per category) adopted from Op de Beeck et al. 24 and shown in random orientations to increase image variation within a category and to match the image variation of the exemplars used for the real-world object categories (see Fig.  1b ; for the full set of artificial object images used, see Supplementary Fig.  2 ). Each run of the experiment contained 17 stimulus blocks, one for each object category (either real-world or artificial). Each participant completed 18 runs, each lasting 4 min 40 s. Other details of the experiment design were identical to that of Experiment 1.

We examined responses from independently localized lower visual areas V1–V4 and higher visual processing regions LOT and VOT. V1–V4 were mapped with flashing checkerboards using standard techniques 63 . Following the detailed procedures described in Swisher et al. 64 and by examining phase reversals in the polar angle maps, we identified areas V1–V4 in the occipital cortex of each participant (see also ref. 65 ) (Fig.  1c ). To identify LOT and VOT, following Kourtzi and Kanwisher 66 , participants viewed blocks of face, scene, object, and scrambled object images. These two regions were then defined as clusters of contiguous voxels in the lateral and ventral occipital cortex, respectively, that responded more to the original than the scrambled object images (Fig.  1c ). LOT and VOT loosely correspond to the location of LO and pFs 66 , 67 , 68 but extend further into the temporal cortex in an effort to include as many object-selective voxels as possible in occipito-temporal regions.

LOT and VOT included a large swath of the ventral and lateral OTC and likely overlapped to a great extent with regions selective for specific object categories, including faces, bodies, or scenes. To understand how the inclusion of these category-specific regions may affect the brain–CNN correlation, we also constructed LOT and VOT ROIs without the category-selective voxels. This was done by testing the category selectivity of each voxel in these two ROIs using the data from the main experiment. Specifically, since there were at least 16 runs in each experiment, using paired t tests, we defined a LOT or a VOT voxel as face-selective if its response was higher for faces than for each of the other non-face categories at p  < 0.05. Similarly, a voxel was defined as body-selective if its response was higher for the average of bodies, cats, and elephants (in Experiment 2, only the average of bodies and elephants was used as cats were excluded in the experiment) than for each of the non-body categories at p  < 0.05. Finally, a voxel was defined as scene-selective if its response was higher for houses than for each of the other non-scene categories at p  < 0.05. In this analysis, a given object category’s responses in the different formats (e.g., original and controlled) were averaged together. Given that each experiment contained at least 16 runs, using the main experimental data to define the category-selective voxels in LOT and VOT is comparable to how these voxels are traditionally defined. We used a relatively lenient threshold of p  < 0.05 here to ensure that we excluded any voxels that exhibited any category selectivity, even if this occurred just by chance.
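A sketch of this selectivity screen in Python (the data layout and function name are assumptions, not the study's code): a voxel is flagged as face-selective only if its run-wise betas for faces are reliably higher than those for every other category.

```python
import numpy as np
from scipy.stats import ttest_rel

def is_selective(voxel_betas, target, other_categories, alpha=0.05):
    """voxel_betas: dict mapping category name -> (n_runs,) beta weights for one voxel."""
    for other in other_categories:
        t, p = ttest_rel(voxel_betas[target], voxel_betas[other])
        if not (t > 0 and p < alpha):   # must be reliably higher than every other category
            return False
    return True

# Voxels for which is_selective(...) is True for faces, bodies, or scenes would then be excluded.
```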

To generate the fMRI response pattern for each ROI in a given run, we first convolved an 8-s stimulus presentation boxcar (corresponding to the length of each image block) with a hemodynamic response function for each condition; we then conducted a general linear model analysis to extract the beta weight for each condition in each voxel of that ROI. These voxel beta weights were used as the fMRI response pattern for that condition in that run. Following Tarhan and Konkle 69 , we selected the top 75 most reliable voxels in each ROI for further analyses. This was done by splitting the data into odd and even halves, averaging the data across the runs within each half, correlating the beta weights from all the conditions between the two halves for each voxel, and then selecting the top 75 voxels showing the highest correlation. This is akin to including the best units in monkey neurophysiological studies. For example, Cadieu et al. 10 only selected a small subset of all recorded single units for their brain–CNN analysis. We obtained the fMRI response pattern for each condition from the 75 most reliable voxels in each ROI of each run. We then averaged the fMRI response patterns across all runs and applied z-normalization to the averaged pattern for each condition in each ROI to remove amplitude differences between conditions and ROIs.
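A compact sketch of this voxel-selection and pattern-extraction step, assuming the ROI's betas are stored as a runs-by-conditions-by-voxels array (the names and layout are illustrative):

```python
import numpy as np

def top_reliable_voxels(betas, n_keep=75):
    """Split-half reliability: correlate odd- and even-run condition profiles per voxel."""
    odd, even = betas[0::2].mean(axis=0), betas[1::2].mean(axis=0)   # (n_conditions, n_voxels)
    r = np.array([np.corrcoef(odd[:, v], even[:, v])[0, 1] for v in range(betas.shape[2])])
    return np.argsort(r)[::-1][:n_keep]

def roi_response_patterns(betas, voxel_idx):
    """Average over runs, keep the reliable voxels, z-normalize each condition's pattern."""
    mean_pattern = betas.mean(axis=0)[:, voxel_idx]                  # (n_conditions, n_keep)
    return (mean_pattern - mean_pattern.mean(axis=1, keepdims=True)) / mean_pattern.std(axis=1, keepdims=True)
```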

CNN details

We tested 14 CNNs in our analyses (see Supplementary Table  1 ). They included both shallower networks, such as Alexnet, VGG16, and VGG19, and deeper networks, such as Googlenet, Inception-v3, Resnet-50, and Resnet-101. We also included a recurrent network, Cornet-S, that has been shown to capture the recurrent processing in macaque IT cortex with a shallower structure 12 , 19 . This CNN has recently been argued to be the current best model of the primate ventral visual processing regions 19 . All the CNNs used were trained with ImageNet images 30 .

To understand how the specific training images would impact CNN representations, besides CNNs trained with ImageNet images, we also examined Resnet-50 trained with stylized ImageNet images 31 . We examined the representations formed in Resnet-50 pretrained with three different procedures 31 : trained only with the stylized ImageNet images (RN50-SIN), trained with both the original and the stylized ImageNet images (RN50-SININ), and trained with both sets of images and then fine-tuned with the stylized ImageNet images (RN50-SININ-IN).

Following O’Connell & Chun 32 , we sampled between 6 and 11 layers, mostly pooling and FC layers, of each CNN (see Supplementary Table  1 for the specific CNN layers sampled). Pooling layers were selected because they typically mark the end of processing for a block of layers when information is pooled to be passed on to the next block of layers. When there were no obvious pooling layers present, the last layer of a block was chosen. For a given CNN layer, we extracted the CNN layer output for each object image in a given condition, averaged the output from all images in a given category for that condition, and then z-normalized the responses to generate the CNN layer response for that object category in that condition (similar to how fMRI category responses were extracted). Cornet-S and the different versions of Resnet-50 were implemented in Python. All other CNNs were implemented in Matlab. The output from all CNNs was analyzed and compared with brain responses using Matlab.
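A hedged sketch of this extraction step using PyTorch/torchvision (not the pipelines used in the study); the choice of Alexnet's last pooling layer, the preprocessing, and the function names are assumptions for illustration.

```python
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights="IMAGENET1K_V1").eval()   # assumes a recent torchvision
layer = model.features[12]                                # last max-pooling layer of Alexnet
acts = {}
layer.register_forward_hook(lambda m, i, o: acts.update(out=o.flatten(1)))  # capture layer output

preprocess = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224),
                                 transforms.ToTensor(),
                                 transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def category_response(image_paths):
    """Average a layer's responses over a category's exemplars, then z-normalize."""
    responses = []
    with torch.no_grad():
        for path in image_paths:
            model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
            responses.append(acts["out"].squeeze(0))
    mean = torch.stack(responses).mean(0)
    return (mean - mean.mean()) / mean.std()
```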

Comparing the representational structures between the brain and CNNs

To determine the extent to which object category representations were similar between brain regions and CNN layers, we correlated the object category representational structure between brain regions and CNN layers. To do so, we computed, for each brain region, all pairwise Euclidean distances among the object categories included in an experiment to form the RDM and then took the off-diagonal values of this RDM as the category dissimilarity vector for that brain region. This was done separately for each participant. Likewise, from the CNN layer output, we computed pairwise Euclidean distances among the object categories included in an experiment to form the RDM and then took the off-diagonal values of this RDM as the category dissimilarity vector for that CNN layer. We applied this procedure to each sampled layer of each CNN.

We then correlated the category dissimilarity vectors between each brain region of each participant and each sampled CNN layer. Following Cichy et al. 5 , all correlations were calculated using Spearman rank correlation to compare the rank order, rather than the absolute magnitude, of the category representational similarity between the brain and CNNs (see also ref. 33 ). Similar results were obtained, however, when Pearson correlation was used instead (see the results reported in Supplementary Figs.  3 , 6 , 7 , and 16 ). All correlation coefficients were Fisher z-transformed before group-level statistical analyses were carried out.

To evaluate the correspondence in representation between lower and higher CNN layers to lower and higher visual processing regions, for each CNN examined, we identified, in each human participant, the CNN layer that showed the best RDM correlation with each of the six brain regions included. We then assessed whether the resulting layer numbers increased from low to high visual regions using Spearman rank correlation. Finally, we tested the resulting correlation coefficients at the participant group level. If a close correspondence in representation exists between the brain and CNNs, the averaged correlation coefficients should be significantly above zero. All stats reported were from one-tailed t tests. One-tailed t tests were used here as only values above zero were meaningful. In addition, all stats reported were corrected for multiple comparisons for the number of comparisons included in each experiment using the Benjamini–Hochberg procedure with the false-discovery rate (FDR) controlled at q  = 0.05 34 .
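
A sketch of this layer-to-region correspondence analysis, assuming the per-participant brain–CNN RDM correlations are stored in a regions × layers matrix with regions ordered from lower to higher in the visual hierarchy (input layout and helper names are assumptions):

```python
import numpy as np
from scipy.stats import spearmanr, ttest_1samp

def layer_region_correspondence(corr_matrix):
    """corr_matrix: (n_regions, n_layers) brain-CNN RDM correlations for
    one participant. Returns the Spearman correlation between region
    order and the index of the best-matching CNN layer."""
    best_layers = corr_matrix.argmax(axis=1)        # best layer per region
    region_order = np.arange(corr_matrix.shape[0])
    rho, _ = spearmanr(region_order, best_layers)
    return rho

def group_level_one_tailed_test(rhos):
    """One-tailed t test of Fisher z-transformed correlations against zero."""
    z = np.arctanh(np.clip(np.asarray(rhos), -0.999999, 0.999999))
    t, p_two_sided = ttest_1samp(z, 0.0)
    return t, p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
```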

To assess how successfully the category RDM from a CNN layer could capture the RDM from a brain region, we first obtained the reliability of the category RDM in a brain region across the group of human participants by calculating the lower and upper bounds of the noise ceiling of the fMRI data following the procedure described by Nili et al. 33 . Specifically, the upper bound of the noise ceiling for a brain region was established by taking the average of the correlations between each participant’s RDM and the group average RDM including all participants, whereas the lower bound of the noise ceiling for a brain region was established by taking the average of the correlations between each participant’s RDM and the group average RDM excluding that participant.
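
The noise-ceiling bounds described above might be computed roughly as follows (a sketch using Spearman correlations between dissimilarity vectors; names are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

def noise_ceiling(subject_dissims):
    """subject_dissims: (n_subjects, n_pairs) category dissimilarity
    vectors from one brain region, one row per participant.
    Returns (lower_bound, upper_bound) of the noise ceiling."""
    group_mean = subject_dissims.mean(axis=0)
    lower, upper = [], []
    for i, subj in enumerate(subject_dissims):
        loo_mean = np.delete(subject_dissims, i, axis=0).mean(axis=0)
        upper.append(spearmanr(subj, group_mean)[0])   # group mean including the participant
        lower.append(spearmanr(subj, loo_mean)[0])     # group mean excluding the participant
    return float(np.mean(lower)), float(np.mean(upper))
```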

To evaluate the degree to which CNN category RDMs may capture those of the different brain regions, for each CNN, using one-tailed t tests, we examined how close the highest correlation between a CNN layer and a brain region was to the lower bound of the noise ceiling of that brain region. These correlation results are reported in Supplementary Figs.  6 , 7 , and 16 . To transform these correlation results into the proportion of explainable brain RDM variance captured by the CNN, we divided the brain–CNN RDM correlation by the corresponding lower bound of the noise ceiling and then squared the resulting value. We evaluated whether a CNN could fully capture the RDM variance of a brain region by testing the difference between 1 and the highest proportion of variance captured by the CNN using one-tailed t tests. One-tailed t tests were used as only testing values below the lower bound of the noise ceiling (for measuring correlation values) or below 1 (for measuring the amount of variance captured) were meaningful here. The t test results were corrected for multiple comparisons for the six brain regions included using the Benjamini–Hochberg procedure at q  = 0.05. If a CNN layer was able to fully capture the representational structure of a brain region, then its RDM correlation with the brain region should exceed the lower bound of the noise ceiling of that brain region, and the proportion of variance explained should not differ from 1. Because the lower bound of the noise ceiling varied somewhat among the different brain regions, for illustration purposes, in Supplementary Figs.  6 , 7 , 16 , and 19 , we plotted the lower bound of the noise ceiling from all brain regions at 0.7 while maintaining the differences between the CNN and brain correlations with respect to their lower bound noise ceilings (i.e., by subtracting the difference between the actual noise ceiling and 0.7 from each brain–CNN correlation value). This did not affect any statistical test results.
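
The transformation from correlation to proportion of explainable variance captured is a one-liner; a sketch:

```python
def proportion_variance_captured(brain_cnn_corr, lower_noise_ceiling):
    """Divide the brain-CNN RDM correlation by the lower bound of the
    noise ceiling, then square the result."""
    return (brain_cnn_corr / lower_noise_ceiling) ** 2
```

For example, a brain–CNN correlation of 0.56 against a lower noise-ceiling bound of 0.7 would correspond to (0.56/0.7)² = 0.64, i.e., about 64% of the explainable variance.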

To directly visualize the object representational structures in different brain regions and CNN layers, using classical multidimensional scaling, we placed the category RDMs onto 2D spaces with the distances among the categories approximating their relative similarities to each other. The same scaling factor was used to plot the MDS plot for each sampled layer of each CNN. Thus the distance among the categories may be directly compared across the different sampled layers of a given CNN and across CNNs. The scaling factor was doubled for the brain MDS plots for Experiments 1 and 3 and was quadrupled for Experiment 2 to allow better visibility of the different categories in each plot. Thus the distance among the categories may still be directly compared across the different brain regions within a given experiment and between Experiments 1 and 3. Since rotations and flips preserve distances on these MDS plots, to make these plots more informative and to see how the representational structure evolved across brain regions and CNN layers, we manually rotated and/or flipped each MDS plot when necessary. In some cases, to maintain consistency across plots, we arbitrarily picked a few categories as our anchor points and then rotated and/or flipped the MDS plots accordingly.
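
The MDS plots described here were produced in Matlab; the following is only an illustrative Python sketch of classical (Torgerson) MDS applied to a category RDM. As noted above, the resulting 2D coordinates are defined only up to rotation and reflection.

```python
import numpy as np

def classical_mds(rdm, n_dims=2):
    """rdm: (n, n) symmetric matrix of pairwise Euclidean distances.
    Returns an (n, n_dims) array of coordinates whose pairwise
    distances approximate the RDM."""
    n = rdm.shape[0]
    d2 = rdm ** 2
    centering = np.eye(n) - np.ones((n, n)) / n
    gram = -0.5 * centering @ d2 @ centering        # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(gram)
    top = np.argsort(evals)[::-1][:n_dims]          # largest eigenvalues first
    return evecs[:, top] * np.sqrt(np.maximum(evals[top], 0))
```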

Reporting summary

Further information on research design is available in the  Nature Research Reporting Summary linked to this article.

Data availability

Data supporting the findings of this study are available at https://osf.io/tsz47/ .  Source data are provided with this paper.

Code availability

Standard code from the listed software was used. No special code was developed for this study.

Change history

06 May 2021

A Correction to this paper has been published: https://doi.org/10.1038/s41467-021-23110-2

Kriegeskorte, N. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 1 , 417–446 (2015).

Rajalingham, R. et al. Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks. J. Neurosci. 38 , 7255–7269 (2018).

Serre, T. Deep learning: the good, the bad, and the ugly. Annu. Rev. Vis. Sci. 5 , 21.1–21.28 (2019).

Yamins, D. L. K. & DiCarlo, J. J. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 19 , 356–365 (2016).

Cichy, R. M., Khosla, A., Pantazis, D., Torralba, A. & Oliva, A. Comparison of deep neural networks to spatiotemporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Sci. Rep. 6 , 27755 (2016).

Eickenberg, M., Gramfort, A., Varoquaux, G. & Thirion, B. Seeing it all: convolutional network layers map the function of the human visual system. NeuroImage 152 , 184–194 (2017).

Güçlü, U. & van Gerven, M. A. J. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects. NeuroImage 145 , 329–336 (2017).

Khaligh-Razavi, S.-M. & Kriegeskorte, N. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLOS Comput. Biol. 10 , e1003915 (2014).

Yamins, D. L. K. et al. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl Acad. Sci. USA 111 , 8619–8624 (2014).

Cadieu, C. F. et al. Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLOS Comput. Biol. 10 , e1003963 (2014).

Cichy, R. M. & Kaiser, D. Deep neural networks as scientific models. Trends Cogn. Sci. 23 , 305–317 (2019).

Kubilius, J., et al. Brain-like object recognition with high-performing shallow recurrent ANNs. in Advances in Neural Information Processing Systems, 32, NeurIPS Proceedings . (2019).

Long, B. & Konkle, T. Mid-level visual features underlie the high-level categorical organization of the ventral stream. Proc. Natl Acad. Sci. USA 115 , E9015–E9024 (2018).

Bracci, S., Ritchie, J. B., Kalfas, I. & Op de Beeck, H. P. The ventral visual pathway represents animal appearance over animacy, unlike human behavior and deep neural networks. J. Neurosci. 39 , 6513–6525 (2019).

King, M. L., Groen, I. I. A., Steel, A., Kravitz, D. J. & Baker, C. I. Similarity judgments and cortical visual responses reflect different properties of object and scene categories in naturalistic images. NeuroImage 197 , 368–382 (2019).

Kriegeskorte, N. & Kievit, R. A. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn. Sci. 17 , 401–412 (2013).

Storrs, K. R., Khaligh-Razavi, S.-M. & Kriegeskorte, N. Noise ceiling on the cross validated performance of reweighted models of representational dissimilarity: Addendum to Khaligh-Razavi & Kriegeskorte (2014). Preprint at bioRxiv https://doi.org/10.1101/2020.03.23.003046 (2020).

Bao, P., She, L., McGill, M. & Tsao, D. Y. A map of object space in primate inferotemporal cortex. Nature 583 , 103–108 (2020).

Kar, K., Kubilius, J., Schmidt, K., Issa, E. B. & DiCarlo, J. J. Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Nat. Neurosci. 22 , 974–983 (2019).

Xu, Y. Comparing visual object representations in the human brain and convolutional neural networks. https://doi.org/10.17605/OSF.IO/TSZ47 (2021).

Vaziri-Pashkam, M. & Xu, Y. An information-driven two-pathway characterization of occipito-temporal and posterior parietal visual object representations. Cereb. Cortex 29 , 2034–2050 (2019).

Vaziri-Pashkam, M., Taylor, J. & Xu, Y. Spatial frequency tolerant visual object representations in the human ventral and dorsal visual processing pathways. J. Cogn. Neurosci. 31 , 49–63 (2019).

Willenbockel, V. et al. Controlling low-level image properties: the SHINE toolbox. Behav. Res. Methods 42 , 671–684 (2010).

Op de Beeck, H. P., Torfs, K. & Wagemans, J. Perceived shape similarity among unfamiliar objects and the organization of the human object vision pathway. J. Neurosci. 28 , 10111–10123 (2008).

Orban, G. A., Van Essen, D. & Vanduffel, W. Comparative mapping of higher visual areas in monkeys and humans. Trends Cogn. Sci. 8 , 315–324 (2004).

Grill-Spector, K., Kushnir, T., Hendler, T. & Malach, R. The dynamics of object-selective activation correlate with recognition performance in humans. Nat. Neurosci. 3 , 837–843 (2000).

Williams, M. A., Dang, S. & Kanwisher, N. G. Only some spatial patterns of fMRI response are read out in task performance. Nat. Neurosci. 10 , 685–686 (2007).

Farah, M. J. Visual Agnosia . (MIT Press, Cambridge, Mass, 2004).

Goodale, M. A., Milner, A. D., Jakobson, L. S. & Carey, D. P. A neurological dissociation between perceiving objects and grasping them. Nature 349 , 154–156 (1991).

Deng, J., et al. ImageNet: a largescale hierarchical image database. in Proc. IEEE conference on computer vision and pattern recognition (CVPR) 248–255 (2009).

Geirhos, R., et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. in Proc. International Conference on Learning Representations (2019).

O’Connell, T. P. & Chun, M. M. Predicting eye movement patterns from fMRI responses to natural scenes. Nat. Commun. 9 , 5159 (2018).

Nili, H. et al. A toolbox for representational similarity analysis. PLOS Comput. Biol. 10 , e1003553 (2014).

Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate—a practical and powerful approach to multiple testing. J. R. Stat. Soc. B Methods 57 , 289–300 (1995).

Shepard, R. N. Multidimensional scaling, tree-fitting, and clustering. Science 210 , 390–398 (1980).

Hubel, D. H. Eye, Brain, and Vision . (WH Freeman, New York, 1988).

von der Heydt, R. Form analysis in visual cortex. in The Cognitive Neurosciences (ed Gazzaniga M. S.), 365–382. (MIT Press, Cambridge, Mass, 1994).

Kourtzi, Z. & Connor, C. E. Neural representations for object perception: structure, category, and adaptive coding. Annu. Rev. Neurosci. 34 , 45–67 (2011).

Tanaka, K. Columns for complex visual object features in the inferotemporal cortex: clustering of cells with similar but slightly different stimulus selectivities. Cereb. Cortex 13 , 90–99 (2003).

Kubilius, J., Bracci, S. & Op de Beeck, H. P. Deep neural networks as a computational model for human shape sensitivity. PLOS Comput. Biol. 12 , e1004896 (2016).

LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521 , 436–444 (2015).

Gatys, L. A., Ecker, A. S. & Bethge, M. Texture and art with deep neural networks. Curr. Opin. Neurobiol. 46 , 178–186 (2017).

Ballester, P. & de Araújo, R. M. On the Performance of GoogLeNet and AlexNet Applied to Sketches. in AAAI 1124–1128 (2016).

Baker, N., Lu, H., Erlikhman, G. & Kellman, P. J. Deep convolutional networks do not classify based on global object shape. PLOS Comput. Biol. 14 , e1006613 (2018).

Cichy, R. M., Chen, Y. & Haynes, J. D. Encoding the identity and location of objects in human LOC. Neuroimage 54 , 2297–2307 (2011).

Hung, C. P., Kreiman, G., Poggio, T. & DiCarlo, J. J. Fast readout of object identity from macaque inferior temporal cortex. Science 310 , 863–866 (2005).

Hong, H., Yamins, D. L. K., Majaj, N. J. & DiCarlo, J. J. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat. Neurosci. 19 , 613–622 (2016).

Rice, G. E., Watson, D. M., Hartley, T. & Andrews, T. J. Low-level image properties of visual objects predict patterns of neural response across category selective regions of the ventral visual pathway. J. Neurosci. 34 , 8837–8844 (2014).

Kietzmann, T. et al. Recurrence required to capture the dynamic computations of the human ventral visual stream. Proc. Natl Acad. Sci. USA 116 , 21854–21863 (2019).

Khaligh-Razavi, S.-M.., Henriksson, L., Kay, K. & Kriegeskorte, N. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models. J. Math. Psychol. 76 , 184–197 (2017).

Kay, K. N., Naselaris, T., Prenger, R. J. & Gallant, J. L. Identifying natural images from human brain activity. Nature 452 , 352–355 (2008).

Geirhos, R., et al. Generalisation in humans and deep neural networks. in Advances in Neural Information Processing Systems 31, (ed S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett), 7549–7561. (Curran Assoc., Red Hook, NY, 2018).

Biederman, I. Recognition-by-components: a theory of human image understanding. Psychol. Rev. 94 , 115–147 (1987).

Xu, Y. & Vaziri-Pashkam, M. The development of transformation tolerant visual representations differs between the human brain and convolutional neural networks. Preprint at bioRxiv https://doi.org/10.1101/2020.08.11.246934 (2020a).

Xu, Y. & Vaziri-Pashkam, M. The coding of object identity and nonidentity features in human occipito-temporal cortex and convolutional neural networks. J. Neurosci. https://doi.org/10.1101/2020.08.11.246967 . (In press).

Kay, K. N. Principles for models of neural information processing. NeuroImage 180 , 101–109 (2018).

Haxby, J. V. et al. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293 , 2425–2430 (2001).

Kamitani, Y. & Tong, F. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 8 , 679–685 (2005).

Dale, A. M., Fischl, B. & Sereno, M. I. Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage 9 , 179–194 (1999).

Vaziri-Pashkam, M. & Xu, Y. Goal-directed visual processing differentially impacts human ventral and dorsal visual representations. J. Neurosci. 37 , 8767–8782 (2017).

Xu, Y. & Vaziri-Pashkam, M. Task modulation of the 2-pathway characterization of occipitotemporal and posterior parietal visual object representations. Neuropsychologia 132 , 107140 (2019).

Xu, Y. A tale of two visual systems: invariant and adaptive visual information representations in the primate brain. Annu. Rev. Vis. Sci. 4 , 311–336 (2018).

Sereno, M. I. et al. Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268 , 889–893 (1995).

Swisher, J. D., Halko, M. A., Merabet, L. B., McMains, S. A. & Somers, D. C. Visual topography of human intraparietal sulcus. J. Neurosci. 27 , 5326–5337 (2007).

Bettencourt, K. C. & Xu, Y. Understanding location- and feature-based processing along the human intraparietal sulcus. J. Neurophysiol. 116 , 1488–1497 (2016).

Kourtzi, Z. & Kanwisher, N. Cortical regions involved in perceiving object shape. J. Neurosci. 20 , 3310–3318 (2000).

Grill‐Spector, K. et al. A sequence of object‐processing stages revealed by fMRI in the human occipital lobe. Hum. Brain Mapp. 6 , 316–328 (1998).

Malach, R. et al. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc. Natl Acad. Sci. USA 92 , 8135–8139 (1995).

Tarhan, L. & Konkle, T. Reliability-based voxel selection. Neuroimage 207 , 116350 (2020).

Download references

Acknowledgements

We thank Martin Schrimpf for help implementing CORnet-S, JohnMark Taylor for extracting the features from the three Resnet-50 models trained with the stylized images, and Thomas O’Connell, Brian Scholl, JohnMark Taylor, and Nick Turk-Browne for helpful discussions and feedback on the results. The project is supported by NIH grants 1R01EY022355 and 1R01EY030854 to Y.X. M.V.P. was supported in part by NIH Intramural Research Program ZIA MH002035.

Author information

Authors and affiliations

Psychology Department, Yale University, New Haven, CT, USA

Yaoda Xu

Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA

Maryam Vaziri-Pashkam

Contributions

The fMRI data used here were from two prior publications, with M.V.-P. and Y.X. designing the fMRI experiments and M.V.-P. collecting and analyzing the fMRI data. Y.X. conceptualized the present study and performed the brain–CNN correlation analyses reported here. Y.X. wrote the paper with comments from M.V.-P.

Corresponding author

Correspondence to Yaoda Xu .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Communications thanks Mark Lescroart and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer review reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information, Peer Review File, Reporting Summary, Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Xu, Y., Vaziri-Pashkam, M. Limits to visual representational correspondence between convolutional neural networks and the human brain. Nat Commun 12 , 2065 (2021). https://doi.org/10.1038/s41467-021-22244-7

Received : 30 March 2020

Accepted : 05 March 2021

Published : 06 April 2021

DOI : https://doi.org/10.1038/s41467-021-22244-7

This article is cited by

  • Improved modeling of human vision by incorporating robustness to blur in convolutional neural networks. Nature Communications (2024)

  • Deep convolutional neural networks are not mechanistic explanations of object recognition. Bojana Grujičić. Synthese (2024)

  • On the ability of standard and brain-constrained deep neural networks to support cognitive superposition: a position paper. Max Garagnani. Cognitive Neurodynamics (2024)

  • Multiple visual objects are represented differently in the human brain and convolutional neural networks. Su Keun Jeong. Scientific Reports (2023)

  • Denoised Internal Models: A Brain-inspired Autoencoder Against Adversarial Attacks. Kai-Yuan Liu. Machine Intelligence Research (2022)


International Conference on Medical Image Computing and Computer-Assisted Intervention

MICCAI 2020: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, pp. 523–532

Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation

  • Boris Shirokikh,
  • Alexey Shevtsov,
  • Anvar Kurmukov,
  • Alexandra Dalechina,
  • Egor Krivov,
  • Valery Kostjuchenko,
  • Andrey Golanov &
  • Mikhail Belyaev
  • Conference paper
  • First Online: 29 September 2020

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 12264))

Target imbalance affects the performance of recent deep learning methods in many medical image segmentation tasks. It is a twofold problem: class imbalance – the size of the positive class (lesion) compared to the negative class (non-lesion); and lesion size imbalance – large lesions overshadow small ones (in the case of multiple lesions per image). While the former has been addressed in multiple works, the latter lacks investigation. We propose a loss reweighting approach to increase the ability of the network to detect small lesions. During the learning process, we assign a weight to every image voxel. The assigned weights are inversely proportional to the lesion volume, so smaller lesions get larger weights. We report the benefit of our method for well-known loss functions, including Dice Loss, Focal Loss, and Asymmetric Similarity Loss. Additionally, we compare our results with other reweighting techniques: Weighted Cross-Entropy and Generalized Dice Loss. Our experiments show that inverse weighting considerably increases the detection quality while preserving the delineation quality at a state-of-the-art level. We publish a complete experimental pipeline ( https://github.com/neuro-ml/inverse_weighting ) for two publicly available datasets of CT images: LiTS and LUNA16. We also show results on a private database of MR images for the task of multiple brain metastases delineation.
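
As an illustration of the inverse-weighting idea summarized in this abstract (the authors' complete pipeline is in the linked repository; the snippet below is only a hedged sketch, and the handling of background voxels is an assumption), a per-voxel weight map could be built like this:

```python
import numpy as np
from scipy.ndimage import label

def inverse_lesion_volume_weights(lesion_mask):
    """lesion_mask: binary 3D array. Each connected lesion's voxels get a
    weight of 1 / lesion_volume, so small lesions contribute to the loss
    as much as large ones; background voxels keep weight 1 here."""
    labeled, n_lesions = label(lesion_mask)
    weights = np.ones(lesion_mask.shape, dtype=np.float64)
    for k in range(1, n_lesions + 1):
        lesion_voxels = labeled == k
        weights[lesion_voxels] = 1.0 / lesion_voxels.sum()
    return weights
```

The weight map would then multiply a per-voxel loss (e.g., cross-entropy) before averaging; how the weights are normalized across an image or a batch is a design choice not shown here.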

  • Segmentation
  • Lung nodules
  • Brain metastases


Acknowledgements

The results of the paper are based on the scientific research supported by the Russian Science Foundation under grant 17-11-01390. The authors also acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study.

Author information

Authors and affiliations

Skolkovo Institute of Science and Technology, Moscow, Russia

Boris Shirokikh & Mikhail Belyaev

Kharkevich Institute for Information Transmission Problems, Moscow, Russia

Boris Shirokikh, Alexey Shevtsov, Anvar Kurmukov & Egor Krivov

Moscow Institute of Physics and Technology, Moscow, Russia

Boris Shirokikh, Alexey Shevtsov & Egor Krivov

Higher School of Economics, Moscow, Russia

Anvar Kurmukov

Moscow Gamma-Knife Center, Moscow, Russia

Alexandra Dalechina & Valery Kostjuchenko

Burdenko Neurosurgery Institute, Moscow, Russia

Andrey Golanov

Corresponding author

Correspondence to Boris Shirokikh .

Editor information

Editors and affiliations

University of Toronto, Toronto, ON, Canada

Anne L. Martel

The University of British Columbia, Vancouver, BC, Canada

Purang Abolmaesumi

University College London, London, UK

Danail Stoyanov

École Centrale de Nantes, Nantes, France

Diana Mateus

EURECOM, Biot, France

Maria A. Zuluaga

Chinese Academy of Sciences, Beijing, China

S. Kevin Zhou

Sorbonne University, Paris, France

Daniel Racoceanu

The Hebrew University of Jerusalem, Jerusalem, Israel

Leo Joskowicz

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 380 KB)

Rights and permissions

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Shirokikh, B. et al. (2020). Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science(), vol 12264. Springer, Cham. https://doi.org/10.1007/978-3-030-59719-1_51

DOI : https://doi.org/10.1007/978-3-030-59719-1_51

Published : 29 September 2020

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-59718-4

Online ISBN : 978-3-030-59719-1




Open Access

Peer-reviewed

Research Article

Multimodal Body Representation of Obese Children and Adolescents before and after Weight-Loss Treatment in Comparison to Normal-Weight Children

Affiliations Department of Psychosomatic Medicine and Psychotherapy, University Medical Hospital, Tübingen, Germany, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, Graduate Training Centre of Neuroscience, International Max Planck Research School, University Tübingen, Tübingen, Germany

Affiliation Department of Psychosomatic Medicine and Psychotherapy, University Medical Hospital, Tübingen, Germany

Affiliation Fachkliniken Wangen i.A., Children Rehabilitation Hospital for Respiratory Diseases, Allergies and Psychosomatics, Wangen i.A., Germany

* E-mail: [email protected]

  • Simone Claire Mölbert, 
  • Helene Sauer, 
  • Dirk Dammann, 
  • Stephan Zipfel, 
  • Martin Teufel, 
  • Florian Junne, 
  • Paul Enck, 
  • Katrin Elisabeth Giel, 
  • Isabelle Mack

  • Published: November 22, 2016
  • https://doi.org/10.1371/journal.pone.0166826

The aim of the study was to investigate whether obese children and adolescents have a disturbed body representation as compared to normal-weight participants matched for age and gender and whether their body representation changes in the course of an inpatient weight-reduction program.

Sixty obese (OBE) and 27 normal-weight (NW) children and adolescents (age: 9–17) were assessed for body representation using a multi-method approach. To this end, we assessed body size estimation, tactile size estimation, heartbeat detection accuracy, and attitudes towards one’s own body. OBE were examined upon admission to and before discharge from an inpatient weight-reduction program. NW served as a cross-sectional control group.

Body size estimation and heartbeat detection accuracy were similar in OBE and NW. OBE overestimated sizes in tactile size estimation and were more dissatisfied with their body as compared to NW. In OBE but not in NW, several measures of body size estimation correlated with negative body evaluation. After weight-loss treatment, OBE had improved in heartbeat detection accuracy and were less dissatisfied with their body. None of the assessed variables predicted weight-loss success.

Conclusions

Although OBE children and adolescents generally perceived their body size and internal status of the body accurately, weight reduction improved their heartbeat detection accuracy and body dissatisfaction.

Citation: Mölbert SC, Sauer H, Dammann D, Zipfel S, Teufel M, Junne F, et al. (2016) Multimodal Body Representation of Obese Children and Adolescents before and after Weight-Loss Treatment in Comparison to Normal-Weight Children. PLoS ONE 11(11): e0166826. https://doi.org/10.1371/journal.pone.0166826

Editor: Andreas Stengel, Charité-Universitätsmedizin Berlin, Campus Benjamin Franklin, GERMANY

Received: September 21, 2016; Accepted: November 5, 2016; Published: November 22, 2016

Copyright: © 2016 Mölbert et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper. Data are available from the corresponding author for researchers that meet the criteria for access to confidential data.

Funding: This study was funded by the Else Kröner-Fresenius-Stiftung, http://www.ekfs.de/ (grant 2011_A135). The EKFS had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors acknowledge support by the Deutsche Forschungsgemeinschaft and the Open Access Publishing Fund of Tübingen University.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Childhood obesity is increasing worldwide, and it is associated with both psychosocial and medical complications [ 1 , 2 ]. Awareness of the problem and motivation are considered key factors in changing health behavior [ 3 , 4 ]. In this sense, it has been suggested that a lack of awareness of one’s own body size or indifference towards one’s own weight status contributes to overweight, as it hampers motivation for weight loss [ 5 – 7 ]. In addition, it has been suggested that disturbed interoceptive processing, as indicated by poor heartbeat detection accuracy, might contribute to excessive food intake [ 8 , 9 ]. As yet, no study has comprehensively investigated different types of body representation in obese children and adolescents. It is still unclear whether obese children and adolescents really have a disturbed body representation and whether weight loss also involves changes in body representation that could be addressed in weight-loss treatment.

Body representation is not uniform, but a conglomerate of multiple body representations that are informed by different modalities [ 10 , 11 ]. In this notion, body representation comprises not only attitudes about body weight and shape, but also a mental picture of one’s own body and implicit representations informed by proprioception, somatosensation and interoception. It is assumed that different body representations are organized along a continuum between implicit and explicit representations [ 12 ].

Studies on childhood obesity have typically focused on explicit body representation only and mostly used cross-sectional designs. It has been shown that a significant proportion of overweight children tend to underestimate their current body size in figure rating scales [ 13 , 14 ], though this has not been replicated in adults when using methods that have a less social focus [ 15 ]. Also, children and adolescents with a high body mass index (BMI; kg/m 2 ) tend to have high body dissatisfaction and low self-esteem, contradicting the idea that the problem of obesity is often ignored [ 1 , 13 , 16 ]. Longitudinal studies suggested that both body dissatisfaction and body size estimation in figure rating tasks approach the performance of normal-weight children when overweight is reduced [ 17 – 19 ].

Recently, more implicit types of body representation came into the focus of obesity research and reopened the debate. Heartbeat detection accuracy has been observed to be diminished in adults with high BMI [ 20 ], and it is associated with healthier eating behavior and better physical fitness in children [ 9 , 21 ]. Also, studies in adults indicate that participants with high BMI might have difficulties in estimating the size of an object touching their skin (tactile size estimation), possibly reflecting a disturbed sense of one’s own size [ 22 – 24 ]. Taken together, while the observed disturbances in explicit measures of body representation could be interpreted as an effect of social teasing and stigmatization, implicit measures suggested that an inaccurate representation of one’s own body size could still play a role in obesity.

In the present study, we wanted to obtain a more comprehensive picture of a possible body image disturbance in childhood obesity than previously reported. Specifically, we aimed to find out i) whether, and in which measures of body representation, obese children and adolescents differ from normal-weight mates matched for age and gender, and ii) how the different measures of body representation are associated with each other in both groups. In addition, we followed up obese children and adolescents until discharge from a weight-loss therapy, as we wanted to investigate iii) whether weight loss induced any changes in the body representation of obese children and adolescents. Finally, we also wanted to test iv) whether any of the body representation measures would serve as a predictor of weight-loss success in obese children, as suggested by health behavior theories [ 3 , 4 ].

Materials and Methods

Study design and participants.

The study presented here was conducted as part of the DROMLIN study (PreDictor Research in Obesity during Medical care—weight Loss in children and adolescents during an INpatient rehabilitation) [ 25 ]. The study protocol was approved by the Ethics Committee of the Medical Faculty of the University of Tübingen, Germany. This study is registered at the German Clinical Trials Register (DRKS) with the clinical trial number DRKS00005122.

Children and parents were informed about the study purpose and provided written consent prior to inclusion. In short, 60 overweight and obese children (OBE; age 9–17, 47% male) with a BMI over the 90th percentile for their age- and sex-specific norms [ 26 ] and an indication for hospitalization for a weight-loss intervention were included. All OBE participated in a weight-loss program at the Children Rehabilitation Hospital for Respiratory Diseases, Allergies and Psychosomatics in Wangen i.A., Germany. The program comprised physical activity, cognitive behavioral therapy, and a balanced diet. A detailed description of the setting is reported elsewhere [ 25 ]. Exclusion criteria were severe psychological comorbidities, linguistic or intellectual limitations, type-1 diabetes, malignant tumors, systemic disorders, or severe cardiovascular diseases. Additionally, 27 normal-weight children (NW; 11–14 years, 56% male) matched for age and gender, with a BMI between the 10th and 90th BMI percentile, from the surrounding area of the University Hospital Tübingen, Germany, were recruited and served as control group.

OBE were tested twice, upon admission (T1) and prior to discharge (T2). The anthropometric and body perception assessments took place in an afternoon session and the heartbeat perception assessment in the morning session. NW were tested once in a single session, and served as control group for T1.

Assessments

Anthropometry.

The physical development of the children was assessed using the Tanner stages [ 27 , 28 ]. The Tanner scale ranges between 1 (prepubertal) and 4 (mature). In the context of the anthropometric measurements, taken in the morning, the actual body widths (spine, hip, thigh, upper arm) and body depths (abdomen, buttocks) were measured with a caliper, and body circumferences (abdomen, buttocks, thigh, upper arm) with a tape measure. Participants were not informed about their body dimensions.

Body size estimation.

Two hours after the anthropometry, the same investigator assessed the corresponding body size estimations by instructing the participants to set their dimensions by moving sliders on a 2 m wooden slat. Then, the investigator measured the adjusted dimension without providing any feedback. At the beginning of each trial, the investigator placed both sliders in the middle of the slat. The children’s cognitive ability to discriminate between physical dimensions was tested by presenting everyday objects of different size that had to be estimated: a mobile phone (9 cm), a book (24 cm), and a bottle (34 cm). After each presentation, the object was removed and the participant was asked to set its length on the wooden slat.

Tactile size estimation.

We conducted a tactile size estimation test similar to the one reported by Keizer et al. [ 29 ] at four different body sites (upper spine, upper arm, buttocks, thigh). The participants were blindfolded and the investigator pressed a small caliper/pair of compasses with predefined distances on different body sites. After each tactile stimulation (each distance and body site), the blindfold was removed and the participants had to reproduce the perceived distance using the wooden slat. The distances between the two points were as follows: spine, 20 cm; upper arm, 10 cm; buttocks, 15 cm; thigh, 10 cm.

Perception indices and scores.

A perception index for each body size and tactile size estimate was calculated according to the formula: perception index = (estimated / actual body size) x 100 [ 30 ]. Next, mean perception scores for each group were calculated as average of the single measures for everyday objects (mobile phone, book, bottle), body width (spine, hip, thigh, upper arm), body depth (abdomen, buttocks), body circumference (abdomen, buttocks, thigh, upper arm) and tactile size estimation (spine, upper arm, buttocks, thigh). Values below 100 indicate an underestimation and values above 100 indicate an overestimation in terms of percent of the actual size.
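
A minimal sketch of the perception index and the aggregated perception score (helper names are hypothetical):

```python
def perception_index(estimated, actual):
    """Perception index = (estimated / actual size) x 100; values above 100
    indicate overestimation, values below 100 underestimation."""
    return estimated / actual * 100

def mean_perception_score(indices):
    """Average the single perception indices into one score per domain."""
    return sum(indices) / len(indices)
```

For example, estimating a 20 cm distance as 24 cm gives a perception index of 120, i.e., a 20% overestimation.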

Heartbeat detection.

The heartbeat detection task was performed as reported previously by Pollatos and Schandry [ 31 ] in a modified version. During the procedure, a conventional ECG (3991/3-GPP BioLog, UFI Company, Morro Bay, CA) recorded the actual cardiac activity while the child was comfortably seated in a chair and was not allowed to speak and to move. A short test interval of 15 seconds was followed by four intervals of 25, 35, 45 and 55 seconds. Between the intervals were resting periods of 30 seconds. The children were instructed to count during each interval their own heartbeats by concentrating on their heart activity. The procedure was standardized by giving the instructions from a tape. A heartbeat detection index for every interval was calculated by the following formula: 1-(|recorded heartbeats–counted heartbeats|/recorded heartbeats). Next, the mean heartbeat detection score was calculated as average of the heartbeat detection indices of the four intervals 25s, 35s, 45s and 55s. The maximum score is 1, the minimum score is 0. A high index or score indicates a good concordance between the detected and actual heartbeat whereas a low score indicates a poor agreement between the detected and the actual heartbeat.
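
The heartbeat detection index and score might be computed as in this sketch (hypothetical helpers):

```python
def heartbeat_detection_index(recorded, counted):
    """Index = 1 - |recorded - counted| / recorded; 1 indicates perfect
    agreement between counted and actual heartbeats."""
    return 1 - abs(recorded - counted) / recorded

def heartbeat_detection_score(intervals):
    """intervals: list of (recorded, counted) pairs for the 25 s, 35 s,
    45 s and 55 s intervals; returns the mean detection index."""
    return sum(heartbeat_detection_index(r, c) for r, c in intervals) / len(intervals)
```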

Concerns about body weight and shape.

Eating behavior and concerns about body weight and shape were assessed with the validated Eating Behaviour and Weight Problems Inventory for Children (EWI-C), consisting of 60 items and 10 subscales [ 32 ]. In this study, the subscales “figure dissatisfaction” (consisting of 5 items) and “concerns about eating” (consisting of 8 items) are reported. Percentile ranks for the values of the subscales were retrieved from sex- and age-specific norm tables. Values between the 16th and 84th percentile can be considered normal.

Statistical analyses

The data were analyzed using SPSS version 19. Normally distributed data are presented as mean±standard deviation. Non-normally distributed data are presented as median [interquartile range], and the perception indices additionally as mean±standard deviation. Differences between OBE T1 and NW were calculated using unpaired t-tests (age, weight, height, BMI-SDS), the chi-square test (sex), or Mann-Whitney U tests if data were not normally distributed (EWI-C, perception indices). Differences between OBE T1 and OBE T2 were analyzed with paired t-tests (weight, BMI-SDS) or the Wilcoxon signed-rank test if data were not normally distributed (EWI-C, perception indices). We used Spearman correlations to analyze associations between variables, because in all analyzed pairs at least one variable was not normally distributed. In order to analyze the association between body representation distortion and successful weight loss in OBE, Spearman correlations between the T1 perception scores and the delta BMI-SDS were calculated. The same Spearman correlations were computed using the T1 absolute values of mis-estimation instead of the T1 perception scores. Spearman correlations were computed for correlation analyses between all perception scores and EWI-C subscales. In order to control for multiple testing, the p-values were false discovery rate (FDR) adjusted [ 33 ]. FDR values of <0.05, and for correlation analyses <0.15, were considered statistically significant.
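
The analyses above were run in SPSS; for illustration only, a minimal Benjamini-Hochberg sketch of FDR-adjusted p-values (not the procedure actually used by the authors) could look like this:

```python
import numpy as np

def fdr_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (FDR values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(scaled, 0, 1)
    return adjusted
```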

Results

Table 1 provides an overview of the characteristics of the study population. At T2, seven children had dropped out, so that the longitudinal data refer to a sample of 53 obese children. The length of the intervention in OBE was 38±10 (min–max: 16–70) days. To exclude possible age effects, all analyses were repeated excluding the four youngest children (aged 9 to 10 years, from the OBE group), which, however, did not influence the results. Similarly, we explored whether results would change if absolute values of percentage of mis-estimation were used instead of perception scores. Again, this was not the case.

Table 1: https://doi.org/10.1371/journal.pone.0166826.t001

Group differences OBE T1 versus NW

As displayed in Table 2, both groups overestimated their body widths and body depths while they underestimated their body circumferences. However, the perception indices of body widths, body depths and body circumferences for the different body sites did not differ significantly between OBE and NW. As a result, the corresponding three aggregated perception scores “Body widths”, “Body depths” and “Body circumferences” were also similar in OBE versus NW ( Fig 1 ). Both groups greatly overestimated the distances in the tactile size estimation task ( Table 2 ). However, the perception indices “Spine”, “Buttocks” and “Thigh” of this task differed significantly between OBE and NW, with OBE overestimating the distances more than NW (Spine: U(N = 87) = 419.5, FDR = 0.005; Buttocks: U(N = 87) = 498.5, FDR = 0.04; Thigh: U(N = 87) = 342.5, FDR<0.001; Table 2 ). Consequently, the aggregated perception score “Tactile Size Estimation” also differed between OBE and NW (U(N = 87) = 434.0, FDR = 0.007, d = 0.81; Fig 1 ). In order to exclude the possibility that either changes in mechanoreceptor density through growth or central nervous system maturation influenced tactile size estimation performance, we explored whether performance correlated with the children’s height and their Tanner stages, which was not the case. The heartbeat detection indices for the different counting intervals were similar in OBE and NW in all intervals ( Table 2 ), and consequently so was the aggregated detection accuracy score ( Fig 1 ). The results for the two subscales “Figure dissatisfaction” and “Concerns about eating” are presented in Table 1 . In both subscales, OBE scored significantly higher than NW (“Concerns about eating”: U(N = 87) = 40.00, p<0.001; “Figure dissatisfaction”: U(N = 87) = 72.00, p<0.001), reflecting higher body dissatisfaction and eating concern in obese children.

Fig 1. The perception scores of everyday objects, body width, body depth, body circumference, tactile size estimation and heartbeat detection are displayed as box-whiskers with a cross, the latter depicting the mean. Except for the heartbeat detection score, values below 100 indicate an underestimation and values above 100 indicate an overestimation in terms of percent of the actual size. For the heartbeat detection, a score of 1 indicates absolute accuracy of heartbeat detection and the minimum score of 0 indicates that no heartbeat was perceived. The mean±standard deviation of the perception scores were as follows: Everyday objects – OBE: 103±08, NW: 103±07; Body width – OBE: 120±15, NW: 121±17; Body depth – OBE: 115±20, NW: 115±17; Body circumference – OBE: 85±16, NW: 89±16; Tactile size estimation – OBE: 182±41, NW: 152±25; Heartbeat detection – OBE: 0.47±0.26, NW: 0.52±0.20. Due to multiple testing the p-values were false discovery rate (FDR)-adjusted. FDR values <0.05 were considered as statistically significant. ** indicates FDR<0.01.

https://doi.org/10.1371/journal.pone.0166826.g001

Table 2: https://doi.org/10.1371/journal.pone.0166826.t002

Correlations between body representation measures

Correlations between all the perception scores and the subscales of the EWI-C were computed separately for OBE (at T1) and NW ( Table 3 ). In OBE, the perception score “body width” correlated weakly to moderately with the perception scores “body depths” and “tactile size estimation” and the EWI-C scale “eating concern”. The perception score “body depth” correlated weakly with “tactile size estimation”. A moderate correlation was observed between the two EWI-C scales “eating concern” and “figure”.

Table 3: https://doi.org/10.1371/journal.pone.0166826.t003

In NW, the perception score “body width” correlated moderately with “tactile size estimation”. Further, the perception score “body width” correlated with “body depth” and “body circumference”, but this finding did not withstand FDR adjustment. In NW, the two EWI-C scales “eating concern” and “figure” correlated strongly with each other. Neither in OBE nor in NW were correlations found between the heartbeat detection score and the other perception scores or EWI-C scales.

Changes in OBE body representation between T1 and T2

In the OBE group, the weight-loss treatment induced no significant effect on any of the aggregated perception scores for “Body widths”, “Body depths”, and “Body circumferences”, nor on the “Tactile Size Estimation” score ( Fig 2A ). We also analyzed the individual changes in aggregated perception scores between T1 and T2 and found no trend of improvement or worsening ( Fig 2B ). In contrast, the heartbeat detection accuracy improved significantly in the course of weight loss from T1 to T2 in all examined intervals ( Table 2 ) and in the aggregated score (Z(N = 52) = -5.174, FDR<0.001, d = 0.67, Fig 2B ). Also, we observed that OBE improved in the EWI-C subscale “Concerns about eating” (Z(N = 53) = -2.81, p = 0.005).

Fig 2. A: The perception scores of everyday objects, body width, body depth, body circumference, tactile size estimation and heartbeat detection are displayed as box-whiskers with a cross, the latter depicting the mean. Except for the heartbeat detection score, values below 100 indicate an underestimation and values above 100 indicate an overestimation in terms of percent of the actual size. For the heartbeat detection, a score of 1 indicates absolute accuracy of heartbeat detection and the minimum score of 0 indicates that no heartbeat was perceived. The mean±standard deviation of the perception scores were as follows: Everyday objects – T1: 103±08, T2: 105±08; Body width – T1: 120±15, T2: 120±14; Body depth – T1: 115±20, T2: 119±16; Body circumference – T1: 85±16, T2: 85±12; Tactile size estimation – T1: 182±41, T2: 183±35; Heartbeat detection – T1: 0.47±0.26, T2: 0.63±0.21. Due to multiple testing the p-values were false discovery rate (FDR)-adjusted. FDR values <0.05 were considered as statistically significant. *** indicates FDR<0.001. B: The change values of the perception scores in OBE are presented as box-whiskers.

https://doi.org/10.1371/journal.pone.0166826.g002

Prediction of weight loss success

None of the assessed measures of body representation at T1 correlated with the weight loss success (delta BMI-SDS).

Discussion

Our observations suggest that obese children and adolescents generally represent their bodies as accurately as normal-weight age mates, though in OBE, body size representation was associated with eating concern. Our observation that none of the assessed variables predicted weight-loss success contradicts the idea that a lack of awareness of their excess body size or poor interoception contributes to being overweight. However, we observed that in the obese children and adolescents, not only eating concern but also heartbeat detection accuracy improved throughout weight loss, suggesting that the program induced improvements in interoceptive processing.

We observed no uniform group differences between OBE and NW with regard to their general body size perception and heartbeat detection accuracy, but only in tactile size estimation and body dissatisfaction. In the body size estimation task, our observations do not confirm previous results from figure rating tasks suggesting that obese children underestimate their size [ 13 , 14 , 34 ]. Rather, our observations match findings obtained with depictive methods in adults [ 15 ] that showed no difference between obese and normal-weight participants. The discrepancy may be due to the fact that figure rating tasks assess own body size perception as compared to a certain social range, whereas metric body size estimation, as used in this study, assesses body size estimation relative to the actual size. Several studies have already shown that families and peers of obese children often do not perceive the child as obese, which may be beneficial for the child’s quality of life [ 35 – 37 ]. It is likely that obese children do not see themselves as being as different from their peers as they are, and thus underestimate in figure rating tasks, while they might be accurate in tasks that do not require a social comparison.

Our observation that OBE children were less accurate than NW children in tactile size estimation is in line with previous findings in adults [ 24 ]. Interestingly, differing from these previous studies, we found that both groups overestimated, but OBE children did so to a significantly higher degree. Tactile size estimation performance correlated neither with height nor with Tanner stages of physical development. We therefore consider it unlikely that differences in growth and maturation processes might have caused the group differences, as previous studies suggested [ 38 , 39 ]. Rather, our results are similar to those found in anorexia nervosa, suggesting that tactile size estimation, despite being considered to assess implicit body representation, might be influenced top-down, e.g. by body dissatisfaction [ 24 ]. Hence, it might be the case that the overestimation of OBE children reflects their perception of being large, although correlations between body dissatisfaction and tactile size estimation did not reach significance.

Heartbeat detection accuracy is assumed to be the central construct underpinning other interoceptive measures [ 40 ]. Further, it has been observed to be negatively correlated with the tendency to evaluate one’s body based on appearance rather than for its effectiveness [ 41 ]. Our observation of no group differences in heartbeat detection accuracy contradicts previous claims that a diminished perception of the inner status of the body might contribute to overweight [ 20 ]. Interestingly, in a large sample of children (n>1500) aged 6 to 11 years, no differences were observed between overweight and normal-weight children at a first assessment, whereas differences between the groups were evident one year later [ 9 ]. Our observations suggest that diminished heartbeat perception is likely not a general symptom of obesity. However, heartbeat perception may be involved in weight regulation, possibly as a mediator of body-related cognitions.
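
As an aside on how heartbeat detection accuracy of this kind is typically quantified, the sketch below implements the conventional mental-tracking (Schandry-style) score, which matches the 0-to-1 range described in the figure legend; the study's exact scoring formula is not reproduced in this excerpt, so treat this as an assumption.

    def heartbeat_score(recorded, counted):
        # Mental-tracking heartbeat perception score in [0, 1]:
        # 1 = perfectly accurate counting, 0 = no heartbeats perceived.
        # recorded and counted are beats per counting interval.
        errors = [abs(r - c) / r for r, c in zip(recorded, counted)]
        return max(0.0, 1.0 - sum(errors) / len(errors))

    # three counting intervals of different lengths (illustrative numbers)
    print(round(heartbeat_score(recorded=[45, 60, 75], counted=[30, 42, 50]), 2))  # 0.68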

Finally, our observation of high body dissatisfaction in OBE children confirms Wardle and Cooke [ 1 ], who identified high body dissatisfaction as one of the main factors behind the compromised psychological well-being of obese children. At the same time, our observation suggests that obese children do not lack awareness of the problem; rather, they are aware of and suffering from their excess weight.

Group-wise correlation analyses of the different measures of body representation revealed an interesting pattern: whereas in the NW group the questionnaire measures of eating concern and body dissatisfaction were independent of the other body representations, in the OBE group high eating concern was associated with body size overestimation. This indicates that in obese children and adolescents, cognitions of being too fat are possibly internalized to a greater degree than previously assumed and thereby might influence body size estimation at a very basic level.

Interestingly, we found that the different measures of body representation do not correlate homogeneously with each other. In both the OBE and the NW group, measures related to size estimation correlated moderately with each other, but not with heartbeat detection accuracy. This supports the notion of body representation as a conglomerate of multiple, rather independent representations and emphasizes the need for a multi-method approach.

Our third research question asked about the role of body representation in weight loss treatment. Although heartbeat detection accuracy is unlikely to be involved in the etiology of overweight, our observation of improved heartbeat detection accuracy at T2 indicates that weight loss treatment affects interoception. From our data, it is unclear whether heartbeat perception accuracy is a marker or a potential moderator of weight loss. A possible mechanism for this relationship involves physical fitness, which has been observed to be associated with heartbeat detection accuracy in high-BMI children [ 21 ]. Alternatively, it could reflect changes in body image, as weight loss might reduce tendencies to evaluate oneself based on appearance [ 41 ]. However, the causal structure of this association is as yet unknown.

In line with other studies, we also observed that body dissatisfaction, as reflected by the "eating concern" scale, improved over the course of the weight loss treatment [ 17 , 18 , 42 , 43 ]. It has to be noted that body dissatisfaction remained at a higher level than in the NW group even at the end of the program. Nevertheless, this suggests that positive effects on psychological well-being set in as soon as weight loss starts.

With regard to the predictive power of body representation for weight loss success, we found that none of the investigated measures predicted the weight loss success of OBE children. In contrast to widely used models of health behavior change, our results suggest that a lack of awareness, and consequently of motivation for weight loss, is not the main hurdle to weight loss in obesity [ 3 , 7 , 44 ]. However, we observed body dissatisfaction to be very sensitive to weight loss, suggesting that motivational variables might be relevant for therapy adherence and success.

It is a limitation of this study (i) that we were not able to analyze body representation over longer follow-up intervals and (ii) that our design does not allow us to disentangle whether the observed changes in body dissatisfaction and interoception were consequences of weight regulation or actively contributed to it. Although we did not find an immediate link between weight and body representation, it is still possible that some of the body representations investigated here are associated with weight loss, weight loss maintenance, or weight gain in the long term. Studies with a longer follow-up interval and more measurement points could help clarify this question and reveal more about the mechanisms through which body representation and weight regulation interact.

There are also several strengths to our study. To our knowledge, we are the first to examine the body representation of obese children from a multi-modal perspective in both a cross-sectional and a longitudinal setting. That way, we were not only able to compare the body representation of obese children to that of normal-weight children, but could also identify changes that occur in the course of weight loss. We observed that obese children do not have general problems representing their excess body size. However, the correlation analyses indicate that their self-categorization as "too large" likely influences their body representation on a basic level. Further studies focusing on the association between perception and representation of the body might help to better explain this observation.

For clinical practice, it is important that we observed counter-evidence to the idea that obese children lack awareness of their excess size or motivation to lose weight. Still, we observed that interoceptive awareness, as indexed by heartbeat detection accuracy, changes over the course of weight loss therapy, suggesting that it might play a role in weight regulation. Further research is needed that tracks different types of body representation throughout development and long-term treatment of overweight. Specifically, the role of interoceptive awareness in weight loss treatment needs further exploration. Our findings also show that neither high body dissatisfaction nor accurate awareness of one's own excess size translates into greater weight loss success. However, to improve the psychosocial well-being of overweight children, weight loss interventions that specifically target body image may be useful.

Acknowledgments

We thank all staff involved at the Fachkliniken Wangen i. A. for their support, and all colleagues at the University Hospital Tübingen who helped with planning, implementation and evaluation of the study. Further, we thank Michael Geuss for proofreading. We are grateful to all the participating children and adolescents as well as to their parents for their collaboration and their interest in the study.

Author Contributions

  • Conceptualization: IM PE SZ DD HS.
  • Formal analysis: IM.
  • Funding acquisition: IM PE.
  • Investigation: HS.
  • Methodology: IM PE HS.
  • Project administration: HS IM.
  • Supervision: IM PE DD.
  • Visualization: IM SCM.
  • Writing – original draft: SCM IM.
  • Writing – review & editing: KEG DD SZ MT FJ PE HS.
  • 3. Prochaska JO, Redding CA, Evers KE. The Transtheoretical Model and stages of change. In: Glanz K, Rimer BK, Viswanath K, editors. Health Behavior and Health Education: Theory, Research, and Practice. 4th ed. San Francisco: Wiley; 2008.

Comparing Heights

For heights in centimeters, set Feet to 0 and enter the measurement in centimeters in the Inches field. Don't mix the two systems, or you'll get a wrong result.

A page allowing the comparison of up to six figures is now available. Note that this is an XHTML+SVG page.

Comparing Heights: Comparing Heights Visually With a Chart

There are a few different ways you can compare heights. Here are a few possibilities:

  • You can use a tape measure or ruler to physically measure the height of two people or objects and compare the measurements.
  • You can use your own body as a reference point. For example, you can stand next to someone and compare your height, or you can hold your arm out horizontally and compare the height of an object to your arm's length.
  • You can use a unit of measurement, such as inches or centimeters, to compare heights. For example, one person might be 6 feet tall (72 inches) and another 5 feet, 6 inches tall (66 inches), so the first person is taller (see the short conversion sketch after this list).
  • You can use visual comparison to estimate the relative heights of two people or objects. For example, you might say that one person is "about the same height as" or "a little taller than" another person, or that an object is "much taller than" or "slightly shorter than" another object.
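
If you want to automate the unit arithmetic from the example above, here is a minimal sketch; the function names and values are just illustrative.

    def to_inches(feet, inches=0):
        # total height in inches from a feet-and-inches measurement
        return feet * 12 + inches

    def to_cm(total_inches):
        # 1 inch = 2.54 cm exactly
        return total_inches * 2.54

    person_a = to_inches(6)      # 6 ft 0 in -> 72 in
    person_b = to_inches(5, 6)   # 5 ft 6 in -> 66 in
    print(person_a - person_b)                    # 6: the first person is 6 inches taller
    print(round(to_cm(person_a - person_b), 1))   # 15.2 cm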

It's important to note that height is affected by a variety of factors, including genetics, diet, and exercise, so height alone is not always a meaningful way to compare people or objects.

Why Use This Comparing Heights Chart?

A comparing heights chart is a tool that allows you to compare the heights of different people or objects. It can be useful for a variety of purposes, helping you understand and compare the relative sizes of different people at a glance.

How to Use Comparing Heights Tool?

There are a few different types of comparing height tools that you can use, depending on your needs and the type of information you want to compare.

  • Physical comparing heights tools
  • Online comparing heights tools
  • App-based comparing heights tools

A Step-by-Step Guide

Height comparison tools are valuable resources that allow you to accurately measure and compare the heights of different objects and people. Whether you're a researcher, designer, or simply curious about size variations, these tools provide a convenient way to obtain precise measurements.

Height comparison tools provide a convenient and accurate means of comparing the heights of objects, people, or landmarks. By following this step-by-step guide, you can effectively use these tools to measure and analyze size variations. Whether for research, design, or personal curiosity, height comparison tools are valuable resources that can enhance your understanding of scale and proportions.

Why Use This Comparing Height Chart?

When it comes to understanding height differences and comparing the stature of individuals, objects, or landmarks, a comparing height chart proves to be an invaluable tool. This practical resource offers a range of benefits and applications across various fields, including:

  • Accurate height comparisons
  • Visual representation
  • Size proportions and scaling
  • Decision-making and planning
  • Educational and learning purposes
  • Research and analysis

Height Difference Chart

Height difference charts are valuable visual tools that help us compare and comprehend the variations in height between individuals, objects, or landmarks. By presenting height disparities in a clear and concise format, these charts enable us to gain a better understanding of size relationships. In this comprehensive guide, we will explore the features, applications, and benefits of height difference charts.

Why We Need a Size Comparison Tool

Introduction:

A size comparison tool is a valuable resource that helps us understand the dimensions, proportions, and relationships between various objects, organisms, or structures. Whether in personal, professional, or educational contexts, having access to a size comparison tool offers numerous benefits.

Q1: What is a size comparison tool?

A size comparison tool is a resource that helps users understand and compare the dimensions, proportions, and relationships between different objects, organisms, or structures. It provides visual representations and measurements to facilitate accurate size comparisons.

Q2: How does a size comparison tool work?

A size comparison tool works by presenting information in a visual format, such as side-by-side comparisons or scalable diagrams. Users input measurements or select objects from a database, and the tool generates visual representations that allow for quick comprehension of size disparities.

Q3: What are the benefits of using a size comparison tool?

Using a size comparison tool offers several benefits. It enhances visual understanding, facilitates proportional assessment, aids in decision-making and planning, supports educational endeavors, assists in research and analysis, and helps visualize abstract concepts.

Q4: In what fields or industries are size comparison tools commonly used?

Size comparison tools find applications in various fields and industries. They are used in architecture, design, manufacturing, biology, astronomy, history, geology, mathematics, and many other disciplines where understanding size relationships is crucial.

Q5: Are size comparison tools only for professionals?

No, size comparison tools are not limited to professionals. They are accessible to anyone who wants to compare sizes accurately. They can be used by individuals for personal projects, by educators for teaching purposes, or by researchers for data analysis and exploration.

Q6: Can size comparison tools handle different units of measurement?

Yes, many size comparison tools are designed to accommodate different units of measurement, such as inches, centimeters, feet, or meters. Users can typically select their preferred unit of measurement within the tool's settings.

Q7: Can I customize the objects or measurements in a size comparison tool?

The level of customization varies depending on the specific size comparison tool. Some tools offer pre-defined objects or organisms for comparison, while others allow users to input custom measurements or import objects of their choice. It's important to explore the features and capabilities of each tool to determine its customization options.

Q8: Are size comparison tools available as mobile apps?

Yes, there are size comparison tools available as mobile apps, allowing users to access them conveniently on their smartphones or tablets. These apps often provide on-the-go size comparisons and may include additional features specific to mobile devices.

Q9: Are size comparison tools accurate?

Size comparison tools strive to provide accurate measurements and representations. However, accuracy may vary depending on factors such as data input, calibration, and the tool's algorithms. It's important to use reliable and reputable tools and ensure accurate measurements are entered for the most precise results.

Q10: Can I share or export the results from a size comparison tool?

Many size comparison tools offer options to save, share, or export the results. This allows users to capture visual representations or measurements for presentations, reports, or further analysis. Look for tools that provide these sharing or export functionalities.

How to make a height comparison chart?

Creating a height comparison chart involves a few steps. Here's a guide on how to make one (a short plotting sketch follows the list):

  • Determine the Purpose and Scope
  • Gather Height Data
  • Choose a Chart Format
  • Select a Tool or Software
  • Enter Data and Create the Chart
  • Customize the Chart
  • Include Additional Information
  • Review and Refine
  • Save and Share
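
As a minimal illustration of the "Enter Data and Create the Chart" and "Save and Share" steps, the sketch below draws a simple bar chart with matplotlib; the names, heights, and file name are hypothetical.

    import matplotlib.pyplot as plt

    # illustrative data: names and heights in centimeters
    names = ["Alex", "Sam", "Jordan"]
    heights_cm = [183, 168, 175]

    fig, ax = plt.subplots()
    ax.bar(names, heights_cm)
    ax.set_ylabel("Height (cm)")
    ax.set_title("Height comparison")
    for x, h in enumerate(heights_cm):
        ax.text(x, h + 1, f"{h} cm", ha="center")  # label each bar

    fig.savefig("height_comparison.png", dpi=150)  # save, then share the image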

Height Comparison Calculator: How to Use It?

A height comparison calculator is a tool that helps you determine and compare the height differences between two or more individuals.

Enter the heights you want to compare, then click the calculate or compare button to generate the results. The calculator will process the input and display the height differences or a comparison visualization. The results may include numerical values, graphical representations, or both, depending on the calculator.
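
Under the hood, a height comparison calculator mostly performs simple subtraction and unit conversion. Here is a minimal sketch, assuming metric input and a feet/inches readout; the function name and values are illustrative.

    def height_difference(cm_a, cm_b):
        # absolute difference, reported both in centimeters and in feet/inches
        diff_cm = abs(cm_a - cm_b)
        total_inches = diff_cm / 2.54
        feet, inches = divmod(round(total_inches), 12)
        return diff_cm, feet, inches

    diff_cm, feet, inches = height_difference(183, 168)
    print(f"{diff_cm} cm difference (about {feet} ft {inches} in)")  # 15 cm (about 0 ft 6 in)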

How do I become taller?

While it's important to note that height is largely determined by genetics and factors beyond our control, there are a few lifestyle practices you can adopt to maximize your potential height. Keep in mind that these methods may only have a limited impact, especially after puberty when the growth plates in the long bones have closed. Here are some tips that may help you optimize your height:

  • Good nutrition
  • Regular exercise and physical activity
  • Adequate sleep
  • Good posture
  • Avoiding smoking and excessive alcohol
  • Wearing appropriate footwear
  • Confidence and posture awareness

What Are the Benefits of a Height Comparison Chart?

A height comparison chart offers several benefits. It provides a visual representation of height differences between individuals or objects, allowing for a quick and easy understanding of relative heights in a visually appealing and engaging way.

Overall, a height comparison chart offers a concise and visually appealing way to present and understand height differences. It aids in communication, decision-making, research, and education, while also providing a record-keeping tool and a source of motivation.


Visual processing in anorexia nervosa and body dysmorphic disorder: similarities, differences, and future research directions

Sarah K. Madsen

a Imaging Genetics Center, Laboratory of Neuro Imaging, Department of Neurology, University of California, Los Angeles, School of Medicine, Los Angeles, CA, USA

b Department of Psychiatry and Behavioral Sciences, Stanford University, School of Medicine, Stanford, CA, USA

Jamie D. Feusner

c Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, School of Medicine, Los Angeles, CA, USA

Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD.

Introduction

Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders characterized by disturbances in the experience of one’s physical appearance. In AN, individuals are preoccupied with body weight and size, often resorting to caloric restriction to maintain a low body weight. They hold often-delusional convictions of being overweight, despite substantial evidence to the contrary. Additionally, they focus on specific body areas that they believe appear “fat,” such as the abdominal region, hips, and face. In BDD, individuals are preoccupied with misperceived defects in appearance ( Phillips et al., 2010 ). As a result, they believe that they look deformed or ugly, even though the perceived abnormalities are not noticeable to others or appear minor. They are often concerned with specific details, typically of the face or head (e.g. skin blemishes, hair texture, shape of nose), although any body part may be of concern. As in AN, they also are highly convinced of their perceptions, and 27–60% are classified as currently delusional ( Mancuso, Knoesen, & Castle, 2010 ; Phillips, Menard, Pagano, Fay, & Stout, 2006 ). Both disorders may manifest similar phenomenologic patterns involving hypervigilant attention to details of appearance, which are perceived as flawed, likely contributing to often-delusional distortions in perception.

AN and BDD are associated with substantial psychological distress and functional impairment. Underscoring the broad public health significance of these conditions, the lifetime risk of attempted suicide in BDD is 22–27.5% ( Phillips et al., 2005a ; Phillips & Diaz, 1997 ; Veale et al., 1996 ), and the risk of completed suicide is 30 times that of the general population ( Phillips & Menard, 2006 ). AN is associated with a mortality rate of 5–7% per decade, and an overall standardized mortality rate higher than that of any other psychiatric illness ( Sullivan, 1995 ).

In addition to similarities in phenomenology, AN and BDD share a peak onset during adolescence, high risk for chronicity, and have similar comorbidity patterns (although there are higher rates of generalized anxiety disorder in AN and higher rates of panic disorder in BDD) ( American Psychiatric Association., 2000 ; Phillips & Kaye, 2007 ; Phillips, Menard, Fay, & Weisberg, 2005b ; Swinbourne & Touyz, 2007 ). AN and BDD co-occur frequently; up to 32% of BDD patients also have a lifetime comorbid eating disorder ( Ruffolo, Phillips, Menard, Fay, & Weisberg, 2006 ) and 25–39% of those with AN are diagnosed with comorbid BDD ( Grant, Kim, & Eckert, 2002 ; Rabe-Jablonska Jolanta & Sobow Tomasz, 2000 ). There is also overlap in specific areas of appearance concerns, e.g. size of abdomen, hips, and thighs ( Grant & Phillips, 2004 ). Approximately 30% of individuals with BDD report significant weight concerns, a characteristic linked to greater symptom severity and morbidity ( Kittler, Menard, & Phillips, 2007 ). The few studies that have directly compared AN and BDD found similarities on clinical and psychological measures, with both groups exhibiting severe body image symptoms and low self-esteem compared to healthy controls ( Hrabosky et al., 2009 ; Kollei, Brunhoeber, Rauh, de Zwaan, & Martin, 2012 ; Rosen & Ramirez, 1998 ). There are also important differences, most notably that the gender distribution is less skewed toward females in BDD ( Buhlmann et al., 2010 ; Koran, Abujaoude, Large, & Serpe, 2008 ; Rief, Buhlmann, Wilhelm, Borkenhagen, & Brahler, 2006 ).

The similarities in clinical features suggest that AN and BDD may represent overlapping body image disorders ( Cororve & Gleaves, 2001 ). However, BDD is currently categorized as a somatoform disorder in DSM-IV-TR and as a form of hypochondriasis in ICD-10, while AN is categorized as an eating disorder in both systems ( American Psychiatric Association., 2000 ; World Health Organization., 1992 ). Moreover, BDD is often considered to be on the obsessive-compulsive disorder (OCD) spectrum, due to similar phenomenology, demographics, heredity, course of illness, and response to treatment ( Hollander & Wong, 1995 ; Phillips et al., 2007 ). (Of note, AN also has some features suggestive of overlap with OCD, including obsessive thoughts and ritualized eating behaviors, high comorbidity with OCD, and a high proportion of first-degree relatives with OCD ( Phillips et al., 2007 ).)

Since distorted perception of appearance is a key feature of both AN and BDD, examining visual processing as a phenotype may provide a level of understanding about the relationship between these two disorders, and about the neurobiology behind this phenomenon, which is less likely to be captured by examining individual categorical diagnoses ( Insel & Cuthbert, 2009 ). This has important clinical relevance, as persistent perceptual disturbance is a strong predictor of relapse in AN ( Keel, Dorer, Franko, Jackson, & Herzog, 2005 ). There is a considerable need for understanding the neurobiology of perception in AN and BDD, including any similarities and differences, to help guide the development of rational treatments. To maintain focus on the phenotype of abnormal visual perception of appearance, we did not include other disorders in this review such as OCD or social anxiety disorder; these disorders may also be related to AN and BDD, although perhaps via different overlapping phenotypes (heightened self-consciousness, tendencies for obsessive thoughts and compulsive behaviors, etc.). We have not included other eating disorders, such as bulimia nervosa (BN), for several reasons. Among BN, AN, and BDD there is overlap of certain common clinical features (perceptual distortions, high trait perfectionism, and high comorbid anxiety) ( American Psychiatric Association., 2000 ; Phillips et al., 2005b ; Phillips et al., 2010 ; Sutandar-Pinnock, Blake Woodside, Carter, Olmsted, & Kaplan, 2003 ; Swinbourne et al., 2007 ). However, BN has additional characteristics that set it apart from AN and BDD with respect to perception and visual processing. For one, distorted body image perception is required for a diagnosis of AN or BDD, but not for BN ( American Psychiatric Association., 2000 ). While many individuals with BN do have body image disturbances ( Jansen, Nederkoorn, & Mulkens, 2005 ; Schneider, Frieler, Pfeiffer, Lehmkuhl, & Salbach-Andrae, 2009 ), this disorder is characterized by a preoccupation with shape and weight, along with body dissatisfaction (even if shape and weight are accurately perceived) ( Stice & Agras, 1998 ). Thus, BN is more heterogeneous when it comes to perceptual distortions, with less consistency than in AN and BDD. Another characteristic that sets BN apart from AN and BDD, in general, is that individuals with BN have higher rates of impulsivity ( Claes, Mitchell, & Vandereycken, 2012a ; Claes et al., 2012b ) and comorbidity with impulse control-related disorders ( Fernandez-Aranda et al., 2008 ). Finally, there is support for the conceptualization of AN and BDD, unlike BN, as including individuals with low insight or delusional beliefs ( Hartmann, Greenberg, & Wilhelm, 2013 ; Konstantakopoulos et al., 2012 ; Mancuso et al., 2010 ).

Conscious perception is a complex phenomenon that relies on multiple visual processing systems in the brain, along with tightly linked cognitive and emotional processes that contribute to the subjective perceptual experience ( Moutoussis, 2009 ; Zeki & Bartels, 1999 ). Visual information is exchanged through functional connections between lower- and higher-order visual areas (occipital, temporal, and parietal), and centers for emotion, cognition, and memory ( Lamme & Roelfsema, 2000 ). This facilitates both bottom-up, perceptually driven visual inputs to emotion and cognitive systems, and top-down modulation of visual input based on conscious interpretation ( Hanson, Hanson, Halchenko, Matsuka, & Zaimi, 2007 ; Iaria, Fox, Chen, Petrides, & Barton, 2008 ). An individual’s current psychological state and past experiences with emotionally charged visual stimuli (e.g. images of bodies and faces for AN and BDD) are ever-present confounds in studies assessing visual processing ( Rossignol et al., 2012 ; Schettino, Loeys, Bossi, & Pourtois, 2012 ). Pre-existing or symptom-dependent abnormalities in the function of lower-order visual systems, higher-order cognitive and emotional systems, or both, could be involved in abnormal perception. The majority of studies performed in AN and BDD thus far, unfortunately, have not been designed to discern top-down from bottom-up phenomena. This review focuses on studies that have addressed visual processing in individuals with AN or BDD. We define visual processing as phenomena involved in any of the following steps: acquisition of visual input in the peripheral sensory system (ophthalmologic), relay of this information to the central nervous system, neural processing of visual information in occipital and occipito-parietal regions (from basic feature characteristics to more complex aspects), and further elaboration and integration into representations of whole percepts (e.g. face or body images) in (primarily) temporal brain areas.

Our goal was to examine evidence for abnormalities of different aspects of visual processing in AN and BDD, from the function and structure of the eye to higher order processing of human face and body images. To focus the review, we excluded studies that lacked information on visual processing itself but may have otherwise investigated consciously or unconsciously held beliefs about appearance, emotional reactions to visual stimuli (including food stimuli), facial emotional recognition, and visual memory or attention. Another goal was to compare visual processing abnormalities between these related disorders; definitive conclusions, however, were limited because no study directly compared these two groups.

We organized these studies into: a) ophthalmologic findings; b) perceptual organization as assessed through neuropsychological tests of visuospatial, global/local processing, or multi-sensory integration that included visual stimuli; c) visual processing of naturalistic images (face or body images); and d) functional and structural brain imaging studies of visual processing. The latter category could provide information about visual processing at any of the aforementioned steps. In addition, we included studies that examined evidence for any abnormalities as either state (secondary to symptoms of the illness) or trait (pre-existing) characteristics in AN or BDD.

Materials and Methods

We searched for articles in ISI Web of Knowledge, PubMed, and PsycINFO databases. We used keywords of either “body dysmorphic disorder” or “anorexia nervosa” along with the following: “vision,” “eye tracking,” “eye movements,” “visual processing,” “visual perception,” “body processing,” “face processing,” “central coherence,” “global local processing,” “Navon,” and “Rey-Osterrieth Complex Figure Test.” We also used the keyword “visual” along with “body dysmorphic disorder,” but not with “anorexia nervosa” because the latter generated excessive unrelated results. We excluded articles that were not peer reviewed (n = 15), not written in English (n = 4), or did not provide data or clinical descriptions of visual processing of human bodies or faces in AN or BDD (n = 200). We did not include articles describing visual memory or attention alone because we felt it was impossible to disentangle elements of visual processing from top-down and bottom-up modulation from other cognitive domains. Some versions of the neuropsychological tests included in our search terms also involve a memory component, although modifications helped separate this confound in some studies. We also included relevant manuscripts that were cited by articles found through the literature search, but were not otherwise retrieved using our search terms. In addition, we performed a search on Google Scholar ( www.scholar.google.com ) using the same search terms to locate any relevant articles that the other search methods may have missed. We did not impose a limitation on publication date of the articles.

Forty-four journal articles for AN and 15 for BDD were included in this review, ranging in publication date from 1973 to 2012. Of these, two articles for AN and one for BDD were literature reviews, one article for AN was a meta-analysis, and three articles for AN were case reports. All studies of BDD included adults only. Most studies of AN included adults (26 total), with ten studies also including adolescents and five studies including only adolescents and children. Most BDD studies did not list illness duration; the three studies that included this information reported mean illness durations of approximately 10–20 years ( Kiri, 2012 ; Stangier, Adam-Schwebe, Muller, & Wolter, 2008 ; Yaryura-Tobias et al., 2002 ). Twenty-two of the AN studies listed illness duration, with means ranging from less than a year to over 20 years across studies and substantial variability within studies. Most AN studies included only females, with the exception of three studies that included one or two males ( Andres-Perpina et al., 2011 ; Castro-Fornieles et al., 2009 ; Slade & Russell, 1973 ) and one large study that included 7 men with AN ( Stedal, Rose, Frampton, Landro, & Lask, 2012 ). BDD studies were generally of mixed gender, with the exception of three studies that included only women ( Clerkin & Teachman, 2008 ; Deckersbach et al., 2000 ; Stangier et al., 2008 ). A minority of studies included populations that were medication-free (six BDD studies, seven AN studies), with most studies not listing medication status, or else including individuals taking antidepressants, anxiolytics, or antipsychotics. Most studies excluded individuals with neurologic conditions or substance abuse, yet allowed other psychiatric comorbidities – with depression, anxiety, and OCD being the most common.

II. Anorexia Nervosa

A. Ophthalmologic findings

Two published studies have investigated vision at the ophthalmologic level in AN. Both found decreased retinal nerve fiber layer thickness in patients compared to healthy controls ( Caire-Estevez, Pons-Vazquez, Gallego-Pinazo, Sanz-Solana, & Pinazo-Duran, 2012 ; Moschos et al., 2011 ). One study specifically found lower mean foveal thickness in AN ( Moschos et al., 2011 ). The two studies showed inconsistent differences in visual acuity and visual fields; Moschos et al. found no differences between AN patients and controls in visual acuity or visual field, whereas Caire-Estevez et al. found worse visual acuity and visual field sensitivity in the AN group. It is unclear whether these differences are secondary to malnourishment and weight loss, as both studies tested underweight patients. Further research in recovered patients would help clarify whether these differences persist after nutrition is improved.

B. Neuropsychological tests of visuospatial and global/local processing

Studies evaluating visuospatial abilities in AN have focused primarily on the integration of sensory information and on cognitive style, the latter suggesting that “weak central coherence” is present in AN ( Lopez, Tchanturia, Stahl, & Treasure, 2008 ). Weak central coherence implies a lack of global and integrated processing and enhanced focus on detail ( Frith, 2003 ). Studies investigating central coherence often use the Rey-Osterrieth Complex Figures Task (RCFT) ( Shin, Park, Park, Seol, & Kwon, 2006 ) or a variation of the Embedded Figures Task ( Witkin, 1950 ).

The RCFT requires participants to draw a complex figure and can be scored on different aspects, including performance and strategy on copy, although the interpretation of visual processing is confounded by memory in both immediate and delayed recall conditions. Studies measuring the accuracy of copy have shown either equivalent ( Castro-Fornieles et al., 2009 ; Danner et al., 2012 ; Lopez et al., 2008 ; Sherman et al., 2006 ; Stedal et al., 2012 ) or poorer performance in AN relative to control groups ( Kim, Lim, & Treasure, 2011 ; Lopez, Tchanturia, Stahl, & Treasure, 2009 ). One study found that underweight participants with AN performed worse than controls on copy, but improved after gaining 10% of body weight ( Kingston, Szmukler, Andrewes, Tress, & Desmond, 1996 ), suggesting that the deficit may be weight-dependent. The majority of studies, which included adolescents and children along with adults, found significantly worse delayed recall in AN ( Andres-Perpina et al., 2011 ; Camacho, 2008 ; Favaro et al., 2012 ; Lopez et al., 2008 ; 2009 ; Mathias & Kent, 1998 ; Pendleton-Jones, 1991 ; Sherman et al., 2006 ; Stedal et al., 2012 ; Tenconi et al., 2010 ; Thompson, 1993 ), or a trend for worse delayed recall ( Castro-Fornieles et al., 2009 ), but some found equivalent performance ( Danner et al., 2012 ; Kim, 2011 ; Kingston et al., 1996 ; Murphy, Nutzinger, Paul, & Leplow, 2002 ).

A few of the studies that found poorer recall on the RCFT in individuals with AN compared to healthy controls also investigated potential mechanisms for this deficit. Three studies, one of which included children and adolescents ( Stedal et al., 2012 ), evaluated the order of construction and found that individuals with AN draw the detailed aspects of the figure first and show less continuity in their drawing ( Lopez et al., 2008 ; Sherman et al., 2006 ; Stedal et al., 2012 ). In two of these studies, copy organization significantly mediated the relationship between diagnostic group and recall accuracy, suggesting that individuals with AN may not encode information efficiently for accurate retrieval ( Lopez et al., 2008 ; Sherman et al., 2006 ).

Most of these studies involved underweight participants, which may explain poorer cognitive ability. However, one study tested a sample of females at risk for developing an eating disorder based on subclinical symptoms ( Alvarado-Sanchez, Silva-Gutierrez, & Salvador-Cruz, 2009 ). This group evidenced more fragmented completion of the figure, although overall accuracy was equivalent to a non-risk comparison group. Two studies of weight-restored participants with AN showed a lack of significant differences on accuracy compared to healthy controls on copy and recall ( Kingston et al., 1996 ; Pendleton-Jones, 1991 ), but did not evaluate strategy or style. A recent study found worse performance on the RCFT and lower central coherence in a cohort of underweight AN participants, but no significant difference between a separate cohort of recovered AN participants and healthy controls ( Favaro et al., 2012 ).

Other studies have examined central coherence with the Embedded Figures Task (EFT), in which participants locate a shape embedded in a complex figure, with shorter response times attributed to bias towards detailed processing. Studies have also used the similar Matching Familiar Figures (MFF) test, which asks participants to identify which one of eight figures matches one previously viewed. Both of these tasks require detailed searching of the test images and visual working memory to recall the previously viewed figure.

Studies utilizing the EFT have shown inconsistent results. Three studies, one including adolescents, found that individuals with AN identified the embedded figures more quickly and with higher accuracy than healthy controls when the embedded figure was available for reference during the task ( Lopez et al., 2008 ; 2009 ; Tokley & Kemps, 2007 ). However, Pendleton-Jones et al. (1991) found that longer time was required in both underweight and weight-restored AN adults relative to healthy controls, using the original version of the test, which required holding the figures in working memory, creating a confound between visual processing and memory. When adults and adolescents were administered a time-constrained EFT task, the AN group correctly located fewer shapes than controls ( Kim et al., 2011 ).

Two studies have tested individuals with AN using the MFF. One study in adults ( Toner, Garfinkel, & Garner, 1987 ) found superior accuracy in AN, suggesting a bias toward detail level processing. They also found faster response time; while this is generally interpreted as suggesting less bias toward detail processing, which requires greater time to perform, it could alternatively be suggestive of bias toward attention to detail along with abnormally high speed of detail processing. The other study in adolescents and adults found no difference in performance or response time compared to controls ( Southgate, Tchanturia, & Treasure, 2008 ).

Other studies have utilized novel methods to investigate visuospatial processing, as well as sensory integration, in individuals with AN. In a task requiring participants to compare two visual stimuli, AN participants performed as quickly and accurately as healthy controls, although they were slower when the comparison was lexical ( Eviatar, Latzer, & Vicksman, 2008 ). In order to evaluate body perception while minimizing the confounding emotional impact of body images, Nico et al. (2010) had participants follow a stimulus on a trajectory and estimate whether it would hit their body ( Nico et al., 2010 ). Participants with AN were worse at detecting their left body boundary, showing a tendency to underestimate it. Although this was incongruent with feelings of “fatness,” which is expected to expand the body boundary, it was similar to the performance of stroke participants with right parietal damage, who were also evaluated. Another study of visuospatial processing used a task of manually matching the angle of a moveable bar. Adolescents with AN performed worse than healthy controls when using their right hand ( Grunwald et al., 2002 ). Taken together, findings from these two studies ( Grunwald et al., 2002 ; Nico et al., 2010 ) suggest deficits both on the left and right side of the body. This could be due to problems with hemispheric integration, since integration of multisensory information occurs in the right posterior parietal cortex ( Grunwald et al., 2002 ). Individuals must perceive the visuospatial field before they can act on it, possibly explaining why one study found perceptual differences on the left side of the body ( Nico et al., 2010 ) and another found performance differences on the right ( Grunwald et al., 2002 ).

Guardia et al. (2012) also found evidence of visuospatial deficits in adolescents and adults. Participants with AN overestimated their body size (but not the body size of others) compared to healthy controls when asked to estimate if the body would be able to pass through a doorway ( Guardia et al., 2012 ).

Integration between sensory modalities is important for adjusting visual perception and correcting errors in perception. This integration can be tested with a size-weight illusion, where one must integrate tactile with visual information to estimate weight in two differently sized objects of the same weight. One such study showed that individuals with AN perform better than controls, suggesting a reduced reliance on visual information in judgment of weight ( Case, Wilson, & Ramachandran, 2012 ). A possible explanation is greater reliance on proprioceptive information, although the mechanism of enhanced performance is unclear.

C. Visual processing of naturalistic images

Studies in AN in adults and adolescents have found abnormalities in attention, and overestimation of body size, specific to images of their own bodies ( Garner, Garfinkel, Stancer, & Moldofsky, 1976 ; Slade et al., 1973 ; Smeets, Smit, Panhuysen, & Ingleby, 1997 ; Urgesi et al., 2012 ). An eye tracking study showed that AN participants focus visual attention on body parts they are dissatisfied with, whereas controls tend to scan the whole body image ( Freeman, 1991 ). Another study showed that AN participants saccade more quickly to their own picture compared to other pictures, unlike controls ( Blechert, Ansorge, & Tuschen-Caffier, 2010 ). These differences are consistent with symptoms of increased attention to body areas that evoke strong feelings of dissatisfaction. Without further investigations into visual processing, however, one cannot conclude if these findings are due to abnormalities in sensory-level visual information processing, cognitive and evaluative processes related to AN, or both.

Several studies indicate that adults and adolescents with AN overestimate their overall body size ( Garner et al., 1976 ; Slade et al., 1973 ; Smeets et al., 1997 ; Urgesi et al., 2012 ), even though processing of their own body parts ( Garner et al., 1976 ; Slade et al., 1973 ; Smeets et al., 1997 ; Urgesi et al., 2012 ) and of the body size of other women ( Garner et al., 1976 ; Slade et al., 1973 ; Smeets et al., 1997 ; Urgesi et al., 2012 ) does not differ or is more accurate than controls. These studies also found that AN participants correctly judge height, body movements ( Garner et al., 1976 ; Slade et al., 1973 ; Smeets et al., 1997 ; Urgesi et al., 2012 ), and objects ( Garner et al., 1976 ; Slade et al., 1973 ; Smeets et al., 1997 ; Urgesi et al., 2012 ). These findings suggest a specific disturbance of own body image. However, other studies have found perceptual abnormalities when viewing images of others’ bodies. AN adults are better at detecting “thinner than” differences in others’ bodies ( Smeets, Ingleby, Hoek, & Panhuysen, 1999 ) and are more accurate than controls in a delayed matching-to-sample task of pictures of male bodies, with no performance differences when matching pictures of body movements in adolescence ( Urgesi et al., 2012 ). In summary, there appears to be evidence in AN for disturbances in visual processing of both own and others’ bodies, although there may be slightly different patterns in each.

D. Brain imaging studies of visual perception

Several brain imaging studies presenting naturalistic images of bodies to participants with AN found abnormal brain activity. In a review, Pietrini et al. (2011) report relatively consistent findings of abnormal activity in frontal regions (anterior cingulate and frontal visual system: right superior frontal (Beato-Fernandez et al., 2009) and right dorsolateral prefrontal (Wagner, Ruf, Braus, & Schmidt, 2003)), parietal regions (inferior parietal lobule), and striatal regions (caudate) ( Pietrini et al., 2011 ). In one functional magnetic resonance imaging (fMRI) study, AN participants showed higher ventral striatal activity when viewing underweight images compared to normal weight bodies, and also preferred underweight images, unlike controls ( Fladung et al., 2010 ). When asked to compare their own body to an image of another body, AN participants showed less activation in the insula and premotor areas and more activation in the anterior cingulate compared to controls ( Friederich et al., 2010 ). This comparison of own vs. other body was associated with greater anxiety in AN participants, who, unsurprisingly, were less satisfied with their own body image. Another study contrasted own body images that had been altered to appear overweight vs. unaltered images. Left medial prefrontal cortex activation was reduced for the restrictive subtype of AN compared to healthy controls, while amygdala activation was normal for the combined AN group ( Miyake et al., 2010 ). A structural MRI study found reduced gray matter density in the left extrastriate body area, which is involved in processing images of human body parts, in individuals with AN compared to controls ( Suchan et al., 2010 ).

Taken together, the functional and structural brain imaging evidence suggests that AN participants demonstrate functional and structural abnormalities in brain areas that are involved in processing visual images of human bodies, as well as systems involved in anxiety and emotion. Preexisting abnormalities in brain function/structure could predispose individuals to developing AN; however, this is difficult to separate from the effects of low weight and poor nutrition, as these were predominantly studies of underweight individuals.

E. State vs. Trait

Some aspects of abnormal visual processing in AN may represent state characteristics (secondary to weight, nutrition, or other symptoms, and modifiable by treatment and recovery) while others represent traits (pre-existing and usually stable across time). Weight gain in underweight AN participants was associated with improved copy scores on tests of global vs. local processing ( Kingston et al., 1996 ) and reduced overestimation of own body width ( Slade et al., 1973 ).

Treatment and recovery from AN have also been associated with changes in functional brain activity. After recovery, AN participants showed normalized activation in the amygdala and fusiform gyrus for happy and fearful faces ( Cowdrey, Harmer, Park, & McCabe, 2012 ). After treatment with cognitive behavioral therapy that reduced negative body-related thoughts compared to a waitlist group, participants showed increased brain activation when viewing pictures of their own bodies compared to pre-treatment, whereas the non-treatment group actually showed a decrease in activity ( Vocks et al., 2011 ). The increase in activation after treatment was seen in areas that process human body images (extrastriate body area ( Downing, Jiang, Shuman, & Kanwisher, 2001 ), left middle temporal gyrus ( Weiner & Grill-Spector, 2011 )) and self-awareness (bilateral middle frontal gyrus ( Platek, Wathne, Tierney, & Thomson, 2008 )).

On the other hand, certain abnormalities in global-local processing may be stable traits of individuals predisposed toward AN, as they have been found in studies of recovered AN participants, unaffected relatives, and at-risk populations with sub-clinical symptoms. (Of note, certain pathophysiological processes evident in recovered AN individuals may represent “scars,” or lasting effects of the underweight state or other aspects of the illness that persist after weight has been restored.) Superior attention to detail and poor central coherence compared to controls was observed in both active and recovered AN participants and their unaffected sisters, for adults and adolescents ( Roberts, Tchanturia, & Treasure, 2012 ; Tenconi et al., 2010 ). A strong correlation between altered central coherence and deficits in set shifting also persisted in recovered AN participants ( Danner et al., 2012 ). Females at risk for developing AN, with sub-clinical symptoms, also demonstrate worse organization (fragmented completion of figure) on the RCFT ( Alvarado-Sanchez et al., 2009 ). Overall, there is some disagreement on whether and how global-local processing changes with treatment in AN. There is no available literature on the stability of other aspects of visual processing in AN.

III. Body Dysmorphic Disorder

There are no published studies yet on ophthalmologic abnormalities in BDD.

Several studies have investigated visuospatial processing in BDD. One study found that individuals with BDD, similar to those with OCD, performed normally on visuospatial construction and memory on the RCFT ( Hanes, 1998 ). A subsequent neuropsychological study, on the other hand, found that the BDD group performed worse than controls on the RCFT ( Deckersbach et al., 2000 ). In this study, group differences in free recall were mediated by deficits in organizational strategies, in which the BDD group selectively recalled details instead of larger organizational design features. The authors suggested that abnormalities in executive functioning might have explained these results. However, earlier perceptual abnormalities in global and local visual processing, or differences in selective attention, may have also contributed, since this task involved viewing and encoding a complex visual figure.

Dunai et al., (2010) administered a battery of executive functioning tests of planning, organization, working memory, and motor speed to participants with and without BDD ( Dunai, Labuschagne, Castle, Kyrios, & Rossell, 2010 ). They found several domains of executive functioning were impaired in BDD, including difficulty manipulating visual information held in working memory on the Spatial Working Memory Task.

Early experimental evidence that BDD may involve aberrant own-face perception comes from a study in which BDD participants and healthy controls viewed an image of their own face and indicated if any alterations had been made. A higher proportion of the BDD group perceived distortions of their faces, when in fact none were made ( Yaryura-Tobias et al., 2002 ). Another study investigated asymmetry detection in individuals with BDD ( Reese, McNally, & Wilhelm, 2010 ). Participants viewed others' faces that were unaltered or altered in symmetry, and also viewed arrays of dots that were symmetric or asymmetric. Individuals with BDD did not differ significantly from controls in accuracy of detecting asymmetry with faces or dot arrays, although they were slower in making decisions about symmetry. In another study, BDD participants were more accurate than controls at detecting changes made to facial features (e.g., distance between the eyes) of photos of others’ faces ( Stangier et al., 2008 ).

Another investigation of face processing demonstrated that individuals with BDD were slower and less accurate than controls in matching the identity of an emotional face to the same face with a neutral expression ( Feusner, Bystritsky, Hellemann, & Bookheimer, 2010a ). This was evident regardless of the type of emotional expression. This suggests general abnormalities in visual processing of faces, which may be more pronounced when features are in a different configuration, such as occurs with emotional expressions.

The “face inversion effect” is a phenomenon in which recognition of inverted (upside down) faces is less accurate and slower relative to recognition of upright faces, due to the absence of a holistic template for inverted faces ( Farah, Tanaka, & Drain, 1995 ). In a study using this task, the BDD group demonstrated a smaller inversion effect during the longer duration stimuli, but no differences were seen for shorter duration stimuli ( Feusner et al., 2010b ). This suggests that BDD individuals may have an imbalance in global vs. local processing, with a tendency to engage in highly detailed processing of faces, whether upright or inverted. This is in contrast to controls, who may primarily engage holistic processing for upright faces, yet rely on detailed processing for inverted faces ( Freire, Lee, & Symons, 2000 ). This may explain the BDD group’s advantage in speed for inverted faces, yet only when stimuli were presented long enough to engage detail processing. Another study used inverted faces to test face recognition in individuals with BDD and healthy controls ( Kiri, 2012 ). Although the study did not test the “inversion effect” per se, they found that individuals with BDD relative to controls had enhanced ability to recognize inverted famous faces, but did not demonstrate significant differences for upright famous faces.

Clerkin and Teachman (2008) tested visual processing of images of own faces morphed with those of highly attractive or unattractive others, in individuals with either high or low BDD symptoms ( Clerkin et al., 2008 ). The low BDD symptom group demonstrated a normative self-enhancement bias (tendency to rate more attractive morphed image as representing themselves), which was not evident in the high BDD symptom group. This resulted in a non-significant trend for interaction between morphed photograph type and group.

A study using eye tracking investigated selective visual attention in BDD, social phobia, and healthy controls ( Grocholewski, Kliem, & Heinrichs, 2012 ). Only the BDD group showed selective attention to their own areas of perceived defects of their faces, as measured by number of fixations per degree of visual angle.

Results from these psychophysical experiments suggest that imbalances in holistic vs. detailed processing may explain performance advantages in individuals with BDD relative to controls for inverted faces ( Feusner et al., 2010b ), as well as for change detection in facial features of others’ faces ( Stangier et al., 2008 ), and may be an inefficient or inaccurate strategy for identity recognition across facial expressions ( Feusner et al., 2010a ). In addition, heightened vigilance to details, particularly for areas of perceived defects ( Grocholewski et al., 2012 ), may also increase susceptibility to errors of commission. This could result in “false positive” errors when scrutinizing own-face images ( Yaryura-Tobias et al., 2002 ).

The first fMRI study to investigate the neural correlates of visual perception in BDD used others’ faces as stimuli ( Feusner, Townsend, Bystritsky, & Bookheimer, 2007 ). BDD participants and healthy controls were scanned with fMRI while matching photographs of others’ faces with normal, high or low spatial frequencies (creating images that contained primarily high detail or configural/holistic information, respectively). The BDD group demonstrated left hemisphere hyperactivity relative to controls in an extended face-processing network for normal and low spatial frequency images. Within-groups results suggested that healthy controls only engaged the left hemisphere for high spatial frequency (high detail) images, whereas BDD participants engaged the left hemisphere for all image types.
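
Spatial-frequency-filtered stimuli of this kind are commonly produced by low-pass filtering an image and taking the residual as the high-pass version. The sketch below is a generic illustration of that approach (the filter width is arbitrary), not the exact filtering procedure or cutoff frequencies used by Feusner et al.:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_frequency_versions(image, sigma_px=8.0):
    """Return (low spatial frequency, high spatial frequency) versions of a grayscale image.

    The low-pass (blurred) image preserves mainly configural/holistic information;
    the high-pass residual preserves mainly fine detail (edges, texture).
    """
    image = np.asarray(image, dtype=float)
    low_sf = gaussian_filter(image, sigma=sigma_px)
    high_sf = image - low_sf
    return low_sf, high_sf

# Illustrative random "image"; real stimuli would be face photographs.
rng = np.random.default_rng(0)
face_like = rng.random((128, 128))
low_sf, high_sf = spatial_frequency_versions(face_like)
print(low_sf.shape, round(float(high_sf.std()), 3))
```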

An fMRI study using own-face stimuli in BDD participants and healthy controls found abnormal hypoactivity in the BDD group in striate and extrastriate visual cortex for low spatial frequency images, and hyperactivity in orbitofrontal cortex and caudate for normal images ( Feusner et al., 2010c ). BDD symptom severity correlated with activity in orbitofrontal-striatal systems and extrastriate visual cortex. In a secondary analysis of data from the same experiment, anxiety scores in BDD were regressed against fMRI signal changes in brain areas implicated in anxiety and in visual processing of details ( Bohon, Hembacher, Moller, Moody, & Feusner, 2012 ). In ventral visual processing areas, intermediate anxiety scores were associated with higher activity than either high or low scores. Interestingly, this relationship between anxiety and activity in ventral visual processing systems held regardless of BDD symptom severity.
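
An inverted-U relationship of the kind reported by Bohon et al. can be modeled by including both linear and quadratic anxiety terms in the regression. The sketch below uses simulated values and an ordinary least-squares fit purely for illustration; it is not the authors' analysis pipeline:

```python
import numpy as np

# Simulated, illustrative data: anxiety scores and ventral visual activity (arbitrary units).
rng = np.random.default_rng(1)
anxiety = np.linspace(0, 40, 30)
activity = -0.02 * (anxiety - 20.0) ** 2 + 8.0 + rng.normal(0, 0.5, anxiety.size)

# Design matrix with intercept, linear, and quadratic anxiety terms.
X = np.column_stack([np.ones_like(anxiety), anxiety, anxiety ** 2])
beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
b0, b1, b2 = beta

# A negative quadratic coefficient with its peak at an intermediate score reproduces
# the inverted-U pattern: highest activity at moderate anxiety.
print(f"quadratic coefficient = {b2:.3f}, peak anxiety = {-b1 / (2 * b2):.1f}")
```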

Another fMRI experiment used inanimate object stimuli to investigate general abnormalities in visual processing in BDD ( Feusner, Hembacher, Moller, & Moody, 2011 ). BDD participants and healthy controls matched photographs of houses that included normal, high or low spatial frequencies. The BDD group demonstrated abnormal hypoactivity in secondary visual processing systems for low spatial frequency images.

These functional neuroimaging studies provide evidence of abnormal visual processing in BDD, although they utilized relatively small sample sizes. The studies found abnormalities in primary and/or secondary visual cortical, temporal, and prefrontal systems, and suggest imbalances in detailed vs. global/configural processing. Moreover, this overall pattern is evident not only for stimuli related to one’s own and others’ appearance, but also for inanimate objects, suggesting more general aberrancies in visual processing.

There are currently no published studies addressing whether visual processing abnormalities in BDD represent state or trait features.

Summary of findings

Overall, the literature on AN and BDD suggests a pattern of abnormalities in visual processing and perceptual organization that includes over-attention to detail and reduced processing of larger global features. In both AN and BDD, cognitive strategy and attention may at least partially mediate these abnormalities, as both groups tend to focus more on symptom-specific details (body parts in AN and facial features in BDD) and misperceive aspects of their own images. However, visuospatial abnormalities are also evident in both disorders for non-appearance-related stimuli. In brain imaging studies, both disorders show abnormal brain activation in frontal, parietal, striatal, and visual systems. Because no study has yet directly compared visual processing in AN and BDD, we consider the following conclusions separately for each group.

In AN, there is evidence of over-attention to detail and reduced processing of larger holistic features, which likely contribute to lower accuracy on visuospatial tasks. AN individuals tend to overestimate their own body size in images. There is also evidence of abnormal reward circuit and limbic system activity for specific, salient body images. Integration of information between the left and right hemispheres in the brain may also be impaired.

Several studies have found that patterns of visual processing in those with AN may depend on body weight. However, possible effects of weight on visual processing of bodies may be intertwined with the severity of the disorder or the degree of recovery. For example, at lower body weights several domains of AN symptoms may be more severe, and the ability to gain weight or successfully maintain a normal weight may be linked to improvement in other symptoms (e.g., cognitive rigidity or anxiety related to weight gain). Thus, it can be difficult to determine whether differences in visual processing after weight gain are related to changes in weight or nutrition, or to improvement in other symptoms.

In BDD, we also see evidence of increased attention to detail, reduced global processing, and poorer performance on visuospatial tasks. In this group, spatial working memory has been found to be impaired, which may contribute to these effects. Individuals with BDD may employ brain systems normally reserved for detailed image processing and underutilize brain regions responsible for configural and holistic processing. They also identify non-existent distortions in images of their own faces and show abnormal sensitivity in detecting changes in others’ faces, possibly due to heightened vigilance to details. Abnormal performance on tests of visual perception and abnormal brain activation patterns are present in BDD for images of one’s own face, others’ faces, and inanimate objects.

However, findings in both disorders should at this point be considered inconclusive, as there are some discrepancies in results across studies. Some of the discrepancies may be explained by differences in the study populations, which could have affected the measurements being analyzed. All BDD studies and some AN studies included only adults, while several AN studies also investigated adolescents. Age differences may explain, for example, discrepancies in studies using the MFF: studies of adults with AN showed higher accuracy and detailed processing, while studies of adolescents found no difference between AN and controls. Because all BDD studies were of adults, no clear conclusions about age can be made for this group. Differences in illness duration across studies may also explain discrepancies in results. For example, studies including individuals with AN with longer durations of illness demonstrated abnormally slow performance. This may be due to an accumulation of damage to the brain as a result of longer-standing malnutrition, anxiety, depression, etc. In general, duration of illness was either not reported or spanned such a broad range (months to decades in some cases) that no clear conclusions can be drawn. Comparisons between AN and BDD should also be made in light of the fact that BDD studies included both genders, while AN studies included only women (with one exception ( Stedal et al., 2012 )). Thus, AN findings may be more specific to women, while BDD findings may generalize to both males and females. Two additional factors that could affect results across studies are comorbidity with other psychiatric disorders (some studies included individuals with AN or BDD who had comorbid anxiety, depression, and OCD, along with other disorders) and current use of psychoactive medications. It is also important to note that not all studies provided detailed descriptions of their study populations, making it difficult to assess these factors thoroughly.

Conclusions are also limited by the fact that, in general, there is an insufficient body of research on visual processing in AN and BDD. In particular, many aspects of visual processing have not been investigated in either disorder (e.g., sensory-level striate and extrastriate visual cortical functioning).

Recommendations for future studies

The following recommendations are meant to address limitations in the current literature and to expand our understanding of pathological processes that cross diagnostic boundaries. It is difficult to draw conclusions about visual processing abnormalities in AN and BDD because few studies have adequately disentangled abnormalities in visual processing from the effects of factors that influence visual perception, particularly in the presence of emotionally salient stimuli. These modifying variables may include anxiety (general, disease-related, or task-provoked), depression, personality traits (such as perfectionism and cognitive rigidity), severity of specific symptoms (e.g., obsessive thoughts or fear of being fat), insight/delusionality, and, in AN, weight (current weight, longitudinal weight gain, or underweight vs. weight-restored status). While some of these factors may be modeled as covariates, others could be manipulated experimentally to provide more power and sensitivity to detect subtle effects on visual processing. For example, emotionally neutral figures and objects can be presented at varying degrees of complexity, contrast, and spatial frequency. Emotional or physical experiences can also be manipulated in experiments to understand their effects on visual processing. Future studies should include stimuli that both do and do not elicit a disease-related emotional response, to separate the influence of emotion on visual processing.
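
As one concrete example of the covariate approach mentioned above, a group effect on task performance can be estimated while adjusting for anxiety and depression scores with an ordinary linear model. The sketch below uses simulated data and hypothetical variable names, and assumes the statsmodels package is available; it illustrates the general strategy rather than any specific published analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, illustrative data: visuospatial task performance for patients vs. controls,
# with anxiety and depression scores as potential modifying variables.
rng = np.random.default_rng(2)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["patient", "control"], n // 2),
    "anxiety": rng.normal(20, 5, n),
    "depression": rng.normal(10, 4, n),
})
df["performance"] = (
    80.0
    - 5.0 * (df["group"] == "patient")   # simulated group deficit
    - 0.3 * df["anxiety"]                # simulated anxiety effect
    + rng.normal(0, 3, n)
)

# Group effect on performance, adjusted for anxiety and depression (covariates).
model = smf.ols("performance ~ group + anxiety + depression", data=df).fit()
print(model.params)
```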

Abnormalities in visuospatial performance on neuropsychological tasks in AN and BDD could be mediated by abnormal cognitive strategies for encoding or retrieval of visual information in working memory. Some studies used modified versions of these tasks to reduce confounds with memory. It would also be worthwhile for future studies to specifically investigate visual memory and attention abnormalities in AN and BDD, using designs that disentangle these cognitive functions from basic visual processing abnormalities. This could be done, for example, with eye-tracking studies using basic visual as opposed to symptom-relevant stimuli, or with studies assessing differences in response when individuals are asked to consciously shift their attention to different stimulus features. As an example of the former, Pallanti et al. (1998) found that the severity of abnormalities in smooth eye movements and saccadic performance in AN to moving, low-level visual stimuli was correlated with OCD symptoms, perfectionism, drive for thinness, and interoceptive awareness (they did not, however, measure perceptual distortions in their participants) ( Pallanti, Quercioli, Zaccara, Ramacciotti, & Arnetoli, 1998 ). Widely used and well-validated, low-level, neutral visual stimuli such as sine-wave gratings ( Campbell & Green, 1965 ) or the contour integration task ( Kovacs, Polat, Pennefather, Chandna, & Norcia, 2000 ) could be employed to assess relatively early visual system functioning. Moreover, brain imaging techniques such as electroencephalography (EEG) or magnetoencephalography (MEG) have the temporal resolution to discriminate early vs. later visual processing abnormalities, which is not possible with fMRI.
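
Sine-wave gratings of the kind described by Campbell and Green (1965) are simple to generate programmatically. The sketch below shows one way to do so; the spatial frequency, orientation, and contrast values are arbitrary illustrative choices rather than parameters from any cited study:

```python
import numpy as np

def sine_grating(size_px=256, cycles_per_image=8.0, orientation_deg=0.0, contrast=0.5):
    """Generate a sine-wave grating as a 2D array with mean luminance 0.5.

    cycles_per_image sets the spatial frequency, orientation_deg rotates the grating,
    and contrast scales the luminance modulation around the mean.
    """
    coords = np.linspace(-0.5, 0.5, size_px)
    x, y = np.meshgrid(coords, coords)
    theta = np.deg2rad(orientation_deg)
    # Project pixel coordinates onto the grating's modulation axis.
    ramp = x * np.cos(theta) + y * np.sin(theta)
    return 0.5 + 0.5 * contrast * np.sin(2.0 * np.pi * cycles_per_image * ramp)

grating = sine_grating(cycles_per_image=4.0, orientation_deg=45.0)
print(grating.shape, grating.min(), grating.max())
```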

Conclusions about similarities and differences between AN and BDD are limited by the fact that these groups have not yet been directly compared on the same experimental tasks. To more definitively compare and contrast visual processing in these related disorders of body image, future studies should include both AN and BDD participants, along with matched controls, within the same experiment to ensure methodological consistency. In addition, studies in large analog populations assessed for the dimensionality of body image disturbances and visual perceptual distortions may uncover dimensional abnormalities in visual processing that map onto dysfunctional brain networks. Another line of research yet to be pursued is an investigation of whether, and how, prior trauma may relate to the development of aberrant visual processing.

Whether visual abnormalities are a contributing cause or a consequence of AN and BDD is still unclear. We attempted to address this question in the State vs. Trait section of our Results; however, no papers were available on this topic for BDD. For AN, several studies suggest that heightened attention to detail and poor central coherence are stable traits that could have contributed to the development of the disorder. However, this inference is based on individuals in recovery or at risk for AN, rather than on individuals who were unaffected at the time of study and later developed AN. This type of powerful longitudinal study has not been done in AN or BDD due to practical barriers, although it would be extremely valuable.

Replication of the current studies with larger sample sizes is also needed to understand which measures (neuropsychological assessments, behavioral measures, MRI measures, etc.) are most reliable and informative for assessing visual processing in these populations. Multi-site studies are likely necessary to obtain large enough samples, given the difficulty of recruiting these populations. Finally, it will be important to perform longitudinal studies to determine whether these putative phenotypes predict course of illness and response to treatment.

Acknowledgments

We thank Dr. M. Strober for his comments on the manuscript.

References

  • Alvarado-Sanchez N, Silva-Gutierrez C, Salvador-Cruz J. Visoconstructive deficits and risk of developing eating disorders. The Spanish journal of psychology. 2009; 12 :677–685. [ PubMed ] [ Google Scholar ]
  • American Psychiatric Association. Diagnostic and statistical manual of mental disorders : DSM-IV-TR. 4th ed. Washington, DC: American Psychiatric Association; 2000. [ Google Scholar ]
  • Andres-Perpina S, Lozano-Serra E, Puig O, Lera-Miguel S, Lazaro L, Castro-Fornieles J. Clinical and biological correlates of adolescent anorexia nervosa with impaired cognitive profile. European child & adolescent psychiatry. 2011; 20 :541–549. [ PubMed ] [ Google Scholar ]
  • Beato-Fernandez L, Rodriguez-Cano T, Garcia-Vilches I, Garcia-Vicente A, Poblete-Garcia V, Castrejon AS, Toro J. Changes in regional cerebral blood flow after body image exposure in eating disorders. Psychiatry Research-Neuroimaging. 2009; 171 :129–137. [ PubMed ] [ Google Scholar ]
  • Blechert J, Ansorge U, Tuschen-Caffier B. A body-related dot-probe task reveals distinct attentional patterns for bulimia nervosa and anorexia nervosa. Journal of abnormal psychology. 2010; 119 :575–585. [ PubMed ] [ Google Scholar ]
  • Bohon C, Hembacher E, Moller H, Moody TD, Feusner JD. Nonlinear relationships between anxiety and visual processing of own and others' faces in body dysmorphic disorder. Psychiatry research. 2012; 204 :132–139. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Buhlmann U, Glaesmer H, Mewes R, Fama JM, Wilhelm S, Brahler E, Rief W. Updates on the prevalence of body dysmorphic disorder: a population-based survey. Psychiatry research. 2010; 178 :171–175. [ PubMed ] [ Google Scholar ]
  • Caire-Estevez P, Pons-Vazquez S, Gallego-Pinazo R, Sanz-Solana P, Pinazo-Duran MD. Restrictive anorexia nervosa: a silent enemy for the eyes and vision. The British journal of ophthalmology. 2012; 96 :145. [ PubMed ] [ Google Scholar ]
  • Camacho R. Neuropsychological evaluation in patients with eating disorders. Salud Mental. 2008; 31 :441–446. [ Google Scholar ]
  • Campbell FW, Green DG. Optical and retinal factors affecting visual resolution. The Journal of physiology. 1965; 181 :576–593. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Case LK, Wilson RC, Ramachandran VS. Diminished size-weight illusion in anorexia nervosa: evidence for visuo-proprioceptive integration deficit. Experimental brain research. Experimentelle Hirnforschung. Experimentation cerebrale. 2012; 217 :79–87. [ PubMed ] [ Google Scholar ]
  • Castro-Fornieles J, Bargallo N, Lazaro L, Andres S, Falcon C, Plana MT, Junque C. A cross-sectional and follow-up voxel-based morphometric MRI study in adolescent anorexia nervosa. Journal of Psychiatric Research. 2009; 43 :331–340. [ PubMed ] [ Google Scholar ]
  • Claes L, Mitchell JE, Vandereycken W. Out of control? Inhibition processes in eating disorders from a personality and cognitive perspective. The International journal of eating disorders. 2012a; 45 :407–414. [ PubMed ] [ Google Scholar ]
  • Claes L, Muller A, Norre J, Van Assche L, Wonderlich S, Mitchell JE. The relationship among compulsive buying, compulsive internet use and temperament in a sample of female patients with eating disorders. European eating disorders review : the journal of the Eating Disorders Association. 2012b; 20 :126–131. [ PubMed ] [ Google Scholar ]
  • Clerkin EM, Teachman BA. Perceptual and cognitive biases in individuals with body dysmorphic disorder symptoms. Cognition & Emotion. 2008; 22 :1327–1339. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Cororve MB, Gleaves DH. Body dysmorphic disorder: a review of conceptualizations, assessment, and treatment strategies. Clin Psychol Rev. 2001; 21 :949–970. [ PubMed ] [ Google Scholar ]
  • Cowdrey FA, Harmer CJ, Park RJ, McCabe C. Neural responses to emotional faces in women recovered from anorexia nervosa. Psychiatry research. 2012; 201 :190–195. [ PubMed ] [ Google Scholar ]
  • Danner UN, Sanders N, Smeets PA, van Meer F, Adan RA, Hoek HW, van Elburg AA. Neuropsychological weaknesses in anorexia nervosa: set-shifting, central coherence, and decision making in currently ill and recovered women. The International journal of eating disorders. 2012; 45 :685–694. [ PubMed ] [ Google Scholar ]
  • Deckersbach T, Savage C, Phillips K, Wilhelm S, Buhlmann U, Rauch S, Baer L, Jenike M. Characteristics of memory dysfunction in body dysmorphic disorder. Journal of the International Neuropsychological Society. 2000; 6 :673–681. [ PubMed ] [ Google Scholar ]
  • Downing PE, Jiang Y, Shuman M, Kanwisher N. A cortical area selective for visual processing of the human body. Science. 2001; 293 :2470–2473. [ PubMed ] [ Google Scholar ]
  • Dunai J, Labuschagne I, Castle DJ, Kyrios M, Rossell SL. Executive function in body dysmorphic disorder. Psychol Med. 2010; 40 :1541–1548. [ PubMed ] [ Google Scholar ]
  • Eviatar Z, Latzer Y, Vicksman P. Anomalous lateral dominance patterns in women with eating disorders: clues to neurobiological bases. The International journal of neuroscience. 2008; 118 :1425–1442. [ PubMed ] [ Google Scholar ]
  • Farah MJ, Tanaka JW, Drain HM. What causes the face inversion effect? J Exp Psychol Hum Percept Perform. 1995; 21 :628–634. [ PubMed ] [ Google Scholar ]
  • Favaro A, Santonastaso P, Manara R, Bosello R, Bommarito G, Tenconi E, Di Salle F. Disruption of Visuospatial and Somatosensory Functional Connectivity in Anorexia Nervosa. Biological Psychiatry. 2012 [ PubMed ] [ Google Scholar ]
  • Fernandez-Aranda F, Pinheiro AP, Thornton LM, Berrettini WH, Crow S, Fichter MM, Halmi KA, Kaplan AS, Keel P, Mitchell J, Rotondo A, Strober M, Woodside DB, Kaye WH, Bulik CM. Impulse control disorders in women with eating disorders. Psychiatry research. 2008; 157 :147–157. [ PubMed ] [ Google Scholar ]
  • Feusner JD, Bystritsky A, Hellemann G, Bookheimer S. Impaired identity recognition of faces with emotional expressions in body dysmorphic disorder. Psychiatry research. 2010a; 179 :318–323. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Feusner JD, Hembacher E, Moller H, Moody TD. Abnormalities of object visual processing in body dysmorphic disorder. Psychological Medicine. 2011:1–13. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Feusner JD, Moller H, Altstein L, Sugar C, Bookheimer S, Yoon J, Hembacher E. Inverted face processing in body dysmorphic disorder. J Psychiatr Res. 2010b; 44 :1088–1094. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Feusner JD, Moody T, Townsend J, McKinley M, Hembacher E, Moller H, Bookheimer S. Abnormalities of visual processing and frontostriatal systems in body dysmorphic disorder. Archives of General Psychiatry. 2010c; 67 :197–205. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Feusner JD, Townsend J, Bystritsky A, Bookheimer S. Visual information processing of faces in body dysmorphic disorder. Archives of General Psychiatry. 2007; 64 :1417–1425. [ PubMed ] [ Google Scholar ]
  • Fladung AK, Gron G, Grammer K, Herrnberger B, Schilly E, Grasteit S, Wolf RC, Walter H, von Wietersheim J. A neural signature of anorexia nervosa in the ventral striatal reward system. The American journal of psychiatry. 2010; 167 :206–212. [ PubMed ] [ Google Scholar ]
  • Freeman R. In the eye of the beholder: Processing body shape information in anorexic and bulimic patients. International Journal of Eating Disorders. 1991; 10 :709–714. [ Google Scholar ]
  • Freire A, Lee K, Symons LA. The face-inversion effect as a deficit in the encoding of configural information: direct evidence. Perception. 2000; 29 :159–170. [ PubMed ] [ Google Scholar ]
  • Friederich HC, Brooks S, Uher R, Campbell IC, Giampietro V, Brammer M, Williams SC, Herzog W, Treasure J. Neural correlates of body dissatisfaction in anorexia nervosa. Neuropsychologia. 2010; 48 :2878–2885. [ PubMed ] [ Google Scholar ]
  • Frith U. Autism: Explaining the enigma. Oxford, UK: Blackwell Publishing; 2003. [ Google Scholar ]
  • Garner DM, Garfinkel PE, Stancer HC, Moldofsky H. Body image disturbances in anorexia nervosa and obesity. Psychosomatic medicine. 1976; 38 :327–336. [ PubMed ] [ Google Scholar ]
  • Grant JE, Kim SW, Eckert ED. Body dysmorphic disorder in patients with anorexia nervosa: prevalence, clinical features, and delusionality of body image. Int J Eat Disord. 2002; 32 :291–300. [ PubMed ] [ Google Scholar ]
  • Grant JE, Phillips KA. Is anorexia nervosa a subtype of body dysmorphic disorder? Probably not, but read on. Harv Rev Psychiatry. 2004; 12 :123–126. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Grocholewski A, Kliem S, Heinrichs N. Selective attention to imagined facial ugliness is specific to body dysmorphic disorder. Body Image. 2012; 9 :261–269. [ PubMed ] [ Google Scholar ]
  • Grunwald M, Ettrich C, Busse F, Assmann B, Dahne A, Gertz HJ. Angle paradigm: a new method to measure right parietal dysfunctions in anorexia nervosa. Archives of clinical neuropsychology : the official journal of the National Academy of Neuropsychologists. 2002; 17 :485–496. [ PubMed ] [ Google Scholar ]
  • Guardia D, Conversy L, Jardri R, Lafargue G, Thomas P, Dodin V, Cottencin O, Luyat M. Imagining one's own and someone else's body actions: dissociation in anorexia nervosa. PloS one. 2012; 7 :e43241. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hanes K. Neuropsychological performance in body dysmorphic disorder. Journal of the International Neuropsychological Society. 1998; 4 :167–171. [ PubMed ] [ Google Scholar ]
  • Hanson SJ, Hanson C, Halchenko Y, Matsuka T, Zaimi A. Bottom-up and top-down brain functional connectivity underlying comprehension of everyday visual action. Brain structure & function. 2007; 212 :231–244. [ PubMed ] [ Google Scholar ]
  • Hartmann AS, Greenberg JL, Wilhelm S. The relationship between anorexia nervosa and body dysmorphic disorder. Clin Psychol Rev. 2013; 33 :675–685. [ PubMed ] [ Google Scholar ]
  • Hollander E, Wong C. Introduction: obsessive-compulsive spectrum disorders. Journal of Clinical Psychiatry. 1995; 56 :3–6. [ PubMed ] [ Google Scholar ]
  • Hrabosky JI, Cash TF, Veale D, Neziroglu F, Soll EA, Garner DM, Strachan-Kinser M, Bakke B, Clauss LJ, Phillips KA. Multidimensional body image comparisons among patients with eating disorders, body dysmorphic disorder, and clinical controls: a multisite study. Body Image. 2009; 6 :155–163. [ PubMed ] [ Google Scholar ]
  • Iaria G, Fox CJ, Chen JK, Petrides M, Barton JJ. Detection of unexpected events during spatial navigation in humans: bottom-up attentional system and neural mechanisms. The European journal of neuroscience. 2008; 27 :1017–1025. [ PubMed ] [ Google Scholar ]
  • Insel TR, Cuthbert BN. Endophenotypes: bridging genomic complexity and disorder heterogeneity. Biological Psychiatry. 2009; 66 :988–989. [ PubMed ] [ Google Scholar ]
  • Jansen A, Nederkoorn C, Mulkens S. Selective visual attention for ugly and beautiful body parts in eating disorders. Behaviour research and therapy. 2005; 43 :183–196. [ PubMed ] [ Google Scholar ]
  • Keel PK, Dorer DJ, Franko DL, Jackson SC, Herzog DB. Postremission predictors of relapse in women with eating disorders. Am J Psychiatry. 2005; 162 :2263–2268. [ PubMed ] [ Google Scholar ]
  • Kim YR. Different Patterns of Emotional Eating and Visuospatial Deficits Whereas Shared Risk Factors Related with Social Support between Anorexia Nervosa and Bulimia Nervosa. Psychiatry investigation. 2011; 8 :9–14. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kim YR, Lim SJ, Treasure J. Different Patterns of Emotional Eating and Visuospatial Deficits Whereas Shared Risk Factors Related with Social Support between Anorexia Nervosa and Bulimia Nervosa. Psychiatry investigation. 2011; 8 :9–14. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kingston K, Szmukler G, Andrewes D, Tress B, Desmond P. Neuropsychological and structural brain changes in anorexia nervosa before and after refeeding. Psychological Medicine. 1996; 26 :15–28. [ PubMed ] [ Google Scholar ]
  • Kiri J. Superior face recognition in Body Dysmorphic Disorder. Journal of Obsessive-Compulsive and Related Disorders. 2012; 1 :175–179. [ Google Scholar ]
  • Kittler JE, Menard W, Phillips KA. Weight concerns in individuals with body dysmorphic disorder. Eat Behav. 2007; 8 :115–120. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kollei I, Brunhoeber S, Rauh E, de Zwaan M, Martin A. Body image, emotions and thought control strategies in body dysmorphic disorder compared to eating disorders and healthy controls. Journal of Psychosomatic Research. 2012; 72 :321–327. [ PubMed ] [ Google Scholar ]
  • Konstantakopoulos G, Varsou E, Dikeos D, Ioannidi N, Gonidakis F, Papadimitriou G, Oulis P. Delusionality of body image beliefs in eating disorders. Psychiatry research. 2012; 200 :482–488. [ PubMed ] [ Google Scholar ]
  • Koran LM, Abujaoude E, Large MD, Serpe RT. The prevalence of body dysmorphic disorder in the United States adult population. CNS Spectrums. 2008; 13 :316–322. [ PubMed ] [ Google Scholar ]
  • Kovacs I, Polat U, Pennefather PM, Chandna A, Norcia AM. A new test of contour integration deficits in patients with a history of disrupted binocular experience during visual development. Vision research. 2000; 40 :1775–1783. [ PubMed ] [ Google Scholar ]
  • Lamme VA, Roelfsema PR. The distinct modes of vision offered by feedforward and recurrent processing. Trends in neurosciences. 2000; 23 :571–579. [ PubMed ] [ Google Scholar ]
  • Lopez C, Tchanturia K, Stahl D, Treasure J. Central coherence in eating disorders: a systematic review. Psychological Medicine. 2008; 38 :1393–1404. [ PubMed ] [ Google Scholar ]
  • Lopez C, Tchanturia K, Stahl D, Treasure J. Weak central coherence in eating disorders: a step towards looking for an endophenotype of eating disorders. Journal of clinical and experimental neuropsychology. 2009; 31 :117–125. [ PubMed ] [ Google Scholar ]
  • Mancuso SG, Knoesen NP, Castle DJ. Delusional versus nondelusional body dysmorphic disorder. Comprehensive Psychiatry. 2010; 51 :177–182. [ PubMed ] [ Google Scholar ]
  • Mathias JL, Kent PS. Neuropsychological consequences of extreme weight loss and dietary restriction in patients with anorexia nervosa. Journal of clinical and experimental neuropsychology. 1998; 20 :548–564. [ PubMed ] [ Google Scholar ]
  • Miyake Y, Okamoto Y, Onoda K, Kurosaki M, Shirao N, Yamawaki S. Brain activation during the perception of distorted body images in eating disorders. Psychiatry research. 2010; 181 :183–192. [ PubMed ] [ Google Scholar ]
  • Moschos MM, Gonidakis F, Varsou E, Markopoulos I, Rouvas A, Ladas I, Papadimitriou GN. Anatomical and functional impairment of the retina and optic nerve in patients with anorexia nervosa without vision loss. The British journal of ophthalmology. 2011; 95 :1128–1133. [ PubMed ] [ Google Scholar ]
  • Moutoussis K. Brain activation and the locus of visual awareness. Communicative & integrative biology. 2009; 2 :265–267. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Murphy R, Nutzinger DO, Paul T, Leplow B. Dissociated conditional-associative learning in anorexia nervosa. Journal of clinical and experimental neuropsychology. 2002; 24 :176–186. [ PubMed ] [ Google Scholar ]
  • Nico D, Daprati E, Nighoghossian N, Carrier E, Duhamel JR, Sirigu A. The role of the right parietal lobe in anorexia nervosa. Psychological Medicine. 2010; 40 :1531–1539. [ PubMed ] [ Google Scholar ]
  • Pallanti S, Quercioli L, Zaccara G, Ramacciotti AB, Arnetoli G. Eye movement abnormalities in anorexia nervosa. Psychiatry research. 1998; 78 :59–70. [ PubMed ] [ Google Scholar ]
  • Pendleton-Jones B. Cognition in eating disorders. Journal of clinical and experimental neuropsychology. 1991; 13 :711–728. [ PubMed ] [ Google Scholar ]
  • Phillips KA, Coles ME, Menard W, Yen S, Fay C, Weisberg RB. Suicidal ideation and suicide attempts in body dysmorphic disorder. Journal of Clinical Psychiatry. 2005a; 66 :717–725. [ PubMed ] [ Google Scholar ]
  • Phillips KA, Diaz SF. Gender differences in body dysmorphic disorder. Journal of Nervous and Mental Disorders. 1997; 185 :570–577. [ PubMed ] [ Google Scholar ]
  • Phillips KA, Kaye WH. The relationship of body dysmorphic disorder and eating disorders to obsessive-compulsive disorder. CNS Spectr. 2007; 12 :347–358. [ PubMed ] [ Google Scholar ]
  • Phillips KA, Menard W. Suicidality in body dysmorphic disorder: a prospective study. Am J Psychiatry. 2006; 163 :1280–1282. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Phillips KA, Menard W, Fay C, Weisberg R. Demographic characteristics, phenomenology, comorbidity, and family history in 200 individuals with body dysmorphic disorder. Psychosomatics. 2005b; 46 :317–325. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Phillips KA, Menard W, Pagano ME, Fay C, Stout RL. Delusional versus nondelusional body dysmorphic disorder: clinical features and course of illness. Journal of Psychiatric Research. 2006; 40 :95–104. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Phillips KA, Wilhelm S, Koran LM, Didie ER, Fallon BA, Feusner J, Stein DJ. Body dysmorphic disorder: some key issues for DSM-V. Depress Anxiety. 2010; 27 :573–591. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Pietrini F, Castellini G, Ricca V, Polito C, Pupi A, Faravelli C. Functional neuroimaging in anorexia nervosa: a clinical approach. European psychiatry : the journal of the Association of European Psychiatrists. 2011; 26 :176–182. [ PubMed ] [ Google Scholar ]
  • Platek SM, Wathne K, Tierney NG, Thomson JW. Neural correlates of self-face recognition: an effect-location meta-analysis. Brain research. 2008; 1232 :173–184. [ PubMed ] [ Google Scholar ]
  • Rabe-Jablonska Jolanta J, Sobow Tomasz M. The links between body dysmorphic disorder and eating disorders. Eur Psychiatry. 2000; 15 :302–305. [ PubMed ] [ Google Scholar ]
  • Reese HE, McNally RJ, Wilhelm S. Facial asymmetry detection in patients with body dysmorphic disorder. Behav Res Ther. 2010; 48 :936–940. [ PubMed ] [ Google Scholar ]
  • Rief W, Buhlmann U, Wilhelm S, Borkenhagen A, Brahler E. The prevalence of body dysmorphic disorder: a population-based survey. Psychological Medicine. 2006; 36 :877–885. [ PubMed ] [ Google Scholar ]
  • Roberts ME, Tchanturia K, Treasure JL. Is attention to detail a similarly strong candidate endophenotype for anorexia nervosa and bulimia nervosa? The world journal of biological psychiatry : the official journal of the World Federation of Societies of Biological Psychiatry. 2012 [ PubMed ] [ Google Scholar ]
  • Rosen JC, Ramirez E. A comparison of eating disorders and body dysmorphic disorder on body image and psychological adjustment. J Psychosom Res. 1998; 44 :441–449. [ PubMed ] [ Google Scholar ]
  • Rossignol M, Campanella S, Maurage P, Heeren A, Falbo L, Philippot P. Enhanced perceptual responses during visual processing of facial stimuli in young socially anxious individuals. Neuroscience letters. 2012; 526 :68–73. [ PubMed ] [ Google Scholar ]
  • Ruffolo J, Phillips K, Menard W, Fay C, Weisberg R. Comorbidity of body dysmorphic disorder and eating disorders: severity of psychopathology and body image disturbance. International Journal of Eating Disorders. 2006; 39 :11–19. [ PubMed ] [ Google Scholar ]
  • Schettino A, Loeys T, Bossi M, Pourtois G. Valence-specific modulation in the accumulation of perceptual evidence prior to visual scene recognition. PloS one. 2012; 7 :e38064. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Schneider N, Frieler K, Pfeiffer E, Lehmkuhl U, Salbach-Andrae H. Comparison of body size estimation in adolescents with different types of eating disorders. European eating disorders review : the journal of the Eating Disorders Association. 2009; 17 :468–475. [ PubMed ] [ Google Scholar ]
  • Sherman BJ, Savage CR, Eddy KT, Blais MA, Deckersbach T, Jackson SC, Franko DL, Rauch SL, Herzog DB. Strategic memory in adults with anorexia nervosa: are there similarities to obsessive compulsive spectrum disorders? The International journal of eating disorders. 2006; 39 :468–476. [ PubMed ] [ Google Scholar ]
  • Shin MS, Park SY, Park SR, Seol SH, Kwon JS. Clinical and empirical applications of the Rey-Osterrieth Complex Figure Test. Nature protocols. 2006; 1 :892–899. [ PubMed ] [ Google Scholar ]
  • Slade PD, Russell GF. Awareness of body dimensions in anorexia nervosa: cross-sectional and longitudinal studies. Psychological Medicine. 1973; 3 :188–199. [ PubMed ] [ Google Scholar ]
  • Smeets MA, Ingleby JD, Hoek HW, Panhuysen GE. Body size perception in anorexia nervosa: a signal detection approach. Journal of Psychosomatic Research. 1999; 46 :465–477. [ PubMed ] [ Google Scholar ]
  • Smeets MA, Smit F, Panhuysen GE, Ingleby JD. The influence of methodological differences on the outcome of body size estimation studies in anorexia nervosa. The British journal of clinical psychology / the British Psychological Society. 1997; 36 (Pt 2):263–277. [ PubMed ] [ Google Scholar ]
  • Southgate L, Tchanturia K, Treasure J. Information processing bias in anorexia nervosa. Psychiatry research. 2008; 160 :221–227. [ PubMed ] [ Google Scholar ]
  • Stangier U, Adam-Schwebe S, Muller T, Wolter M. Discrimination of facial appearance stimuli in body dysmorphic disorder. Journal of abnormal psychology. 2008; 117 :435–443. [ PubMed ] [ Google Scholar ]
  • Stedal K, Rose M, Frampton I, Landro NI, Lask B. The neuropsychological profile of children, adolescents, and young adults with anorexia nervosa. Archives of clinical neuropsychology : the official journal of the National Academy of Neuropsychologists. 2012; 27 :329–337. [ PubMed ] [ Google Scholar ]
  • Stice E, Agras WS. Predicting onset and cessation of bulimic behaviors during adolescence: A longitudinal grouping analysis. Behavior Therapy. 1998; 29 :257–276. [ Google Scholar ]
  • Suchan B, Busch M, Schulte D, Gronemeyer D, Herpertz S, Vocks S. Reduction of gray matter density in the extrastriate body area in women with anorexia nervosa. Behavioural brain research. 2010; 206 :63–67. [ PubMed ] [ Google Scholar ]
  • Sullivan PF. Mortality in anorexia nervosa. Am J Psychiatry. 1995; 152 :1073–1074. [ PubMed ] [ Google Scholar ]
  • Sutandar-Pinnock K, Blake Woodside D, Carter JC, Olmsted MP, Kaplan AS. Perfectionism in anorexia nervosa: a 6–24-month follow-up study. The International journal of eating disorders. 2003; 33 :225–229. [ PubMed ] [ Google Scholar ]
  • Swinbourne JM, Touyz SW. The co-morbidity of eating disorders and anxiety disorders: a review. Eur Eat Disord Rev. 2007; 15 :253–274. [ PubMed ] [ Google Scholar ]
  • Tenconi E, Santonastaso P, Degortes D, Bosello R, Titton F, Mapelli D, Favaro A. Set-shifting abilities, central coherence, and handedness in anorexia nervosa patients, their unaffected siblings and healthy controls: exploring putative endophenotypes. The world journal of biological psychiatry : the official journal of the World Federation of Societies of Biological Psychiatry. 2010; 11 :813–823. [ PubMed ] [ Google Scholar ]
  • Thompson SBN. Implications of Neuropsychological Test Results of Women in a New Phase of Anorexia Nervosa. European Eating Disorders Review. 1993; 1 :152–165. [ Google Scholar ]
  • Tokley M, Kemps E. Preoccupation with detail contributes to poor abstraction in women with anorexia nervosa. Journal of clinical and experimental neuropsychology. 2007; 29 :734–741. [ PubMed ] [ Google Scholar ]
  • Toner BB, Garfinkel PE, Garner DM. Cognitive style of patients with bulimic and diet-restricting anorexia nervosa. The American journal of psychiatry. 1987; 144 :510–512. [ PubMed ] [ Google Scholar ]
  • Urgesi C, Fornasari L, Perini L, Canalaz F, Cremaschi S, Faleschini L, Balestrieri M, Fabbro F, Aglioti SM, Brambilla P. Visual body perception in anorexia nervosa. The International journal of eating disorders. 2012; 45 :501–511. [ PubMed ] [ Google Scholar ]
  • Veale D, Boocock A, Gournay K, Dryden W, Shah F, Willson R, Walburn J. Body dysmorphic disorder. A survey of fifty cases. British Journal of Psychiatry. 1996; 169 :196–201. [ PubMed ] [ Google Scholar ]
  • Vocks S, Schulte D, Busch M, Gronemeyer D, Herpertz S, Suchan B. Changes in neuronal correlates of body image processing by means of cognitive-behavioural body image therapy for eating disorders: a randomized controlled fMRI study. Psychological Medicine. 2011; 41 :1651–1663. [ PubMed ] [ Google Scholar ]
  • Wagner A, Ruf M, Braus DF, Schmidt MH. Neuronal activity changes and body image distortion in anorexia nervosa. Neuroreport. 2003; 14 :2193–2197. [ PubMed ] [ Google Scholar ]
  • Weiner KS, Grill-Spector K. Not one extrastriate body area: using anatomical landmarks, hMT+, and visual field maps to parcellate limb-selective activations in human lateral occipitotemporal cortex. NeuroImage. 2011; 56 :2183–2199. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Witkin HA. Individual differences in ease of perception of embedded figures. Journal of personality. 1950; 19 :1–15. [ PubMed ] [ Google Scholar ]
  • World Health Organization. The ICD-10 classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. Geneva, Switzerland: WHO; 1992. [ Google Scholar ]
  • Yaryura-Tobias J, Neziroglu F, Chang R, Lee S, Pinto A, Donohue L. Computerized perceptual analysis of patients with body dysmorphic disorder. CNS Spectrums. 2002; 7 :444–446. [ PubMed ] [ Google Scholar ]
  • Zeki S, Bartels A. Toward a theory of visual consciousness. Consciousness and cognition. 1999; 8 :225–259. [ PubMed ] [ Google Scholar ]
