Flaws of Object Oriented Modeling

At the beginning of the computer era, system designers came from the world of hardware, and it shows. In hardware there are many working elements that can operate in parallel, often at different rates of operation. This requires a high degree of accuracy in system timing. Chip designers count the number of transistors between two elements to make sure that the Operation Flow is maintained.

The Assembly language defines a set of primitive, native operations. Programming and software design in Assembly are in direct correspondence with the execution flow. If you want the flow to branch, you use a Jump operation explicitly. This is because Assembly was originally designed by hardware developers and was written to accommodate the hardware.

The C language is a procedural language. It was originally created by Assembly programmers (evolving from 'B'). We can still see Assembly-style operations built into the language, for example:
++ is INC
-- is DEC
[A] ? [B] : [C] is equal to:
    [A]      ;// evaluate the test [A]
    JZ _else ;// if [A] is zero (false) jump to [C]
    [B]      ;// do [B]
    JMP _end ;// go to end
_else:
    [C]      ;// else do [C]
_end:
When you are used to working with Assembly you get used to thinking in "test" – "do this if so" – "do this if not". C programmers hardly ever think in these terms.
The C language helps us group sets of operations together and releases us from the need to use Jumps or Go To-s. A jump can be very simple to track if used correctly, but it is easily abused into what was coined "Spaghetti Code", named for all the lines you need to draw when you try to track the execution flow of a badly designed application.

Next in the evolution came C++, which is Object Oriented in design. This allows the separation of code into discrete software units called classes. Object Oriented programming allows multiple teams of developers to work on the same project very easily. Object Oriented languages can really help the developer manage the code.

The problem that came with Object Oriented programming is that these languages are really designed only to help the developer manage the code…
Now it is almost impossible to follow the execution flow. When you want to know what happened in your application when the error message popped up, you have to go over all the objects and classes that the execution flow passed through, and many times the flow travels between several member functions before leaving for another object.

Many times I have seen 'pure' Object Oriented design produce code that is a collection of many three-line functions calling each other. Every button click in the application travels through ten or more small functions. You can no longer follow the execution flow by reading the code. This brings two major problems that we face today.

The first problem is that it is no longer possible to detect execution-flow bugs with a simple code review. Going over Assembly code it is very easy to detect simple bugs such as bad down-casts, potential overflows, and so on. Reading object-oriented code you can't see the big picture, and it is often impossible to review all the small functions that call the one function you modified.

The second problem with this model is the "Not my fault" syndrome. "I only called a member of another object and it returned FALSE. Don't ask me why." This is how you get an error that says: "Problem with saving the document. Reason is '0x8000745 – unknown'. What would you like to do?" What do YOU think I should do?! The programmer got this return value from some object he is not familiar with, has no idea what the value means, and so just pushes it up to the higher level. The last level you can propagate to is the user, and so my mother keeps facing these interesting decisions when she is trying to save a picture.

Object Oriented Modeling was invented to help developers manage the code, with no regard for execution flow. Until now we used step-by-step debugging to see the execution flow. This is no longer practical when we plan on having multiple threads going over our code: you single-step one thread while another completes 5,000 loops in the background. If you have multiple threads going over the same function they might all stop on the same breakpoint, and you have no effective way of telling which is which.

Following execution flow today is a terrible problem.

This is the first in a collection of articles that will introduce a new model, called the Operation View Model, along with the motivation for using it. This model can describe any element in the computer world, and it is the next step in software evolution.
The next article will demonstrate how operating systems follow an evolution pattern similar to the one described here.


28 comments

anonymous:

I agree that OOP is flawed in the sense that it is over-complicated, but the reasons you have outlined don't make a lot of sense. I have written in assembly before and started in C programming. The idea that small functions and many classes are the root of the problem does not make sense to me.
If a function manages a large process then it needs to call several smaller functions to achieve that. If you push all of the code into the same function then you can't see what's going on through all of the details. I don't want my function that executes a database query to contain all the details of how to connect to the specific DB, execute the query, and extract the data. I want that to be abstract.
However, there are serious flaws in the OO approach. I have spent years mastering the approach to find that I can produce quality solutions in it that no one else can follow because hardly any programmers I work with understand OO. Most developers use just what they need to get by.
This just continues to get worse, as now I have delved into the world of Enterprise design patterns only to find that they help me keep the solution straight but leave other developers puzzled by the solutions.

anonymous:

Orienting to an object in reality is harder than you might think: to represent merely the shift from one vector of orientation to another in 3D space, mathematicians depart from normal numbers for quaternions. It's game programmers who now wrestle with this complexity, and their frustrations work up the line to designers. And math programming shakes up the shop: watch Wolfram Alpha, and do have a look at Cabri and Cinderella.

Am I just being theoretical and a math snob? My concern is that even if you don't have quaternion-level flexibility to back out of jams, the electrons you are trying to order around the chip do, and *will use it* if it gets them to a lower-energy state, which typically leaves you at the console with less initiative to intervene.

I think you need default objects to grip the chip before taking options to steer it. This is max hard with (a) disks that rotate while the chip doesn't (the database devil); (b) USB devices that stack like supercomputer components; and (c) the DSP chip, which does that Fourier stuff underlying the uncertainty principle! Here one is advised to distinguish kinematic (monitoring) and dynamic (intervening) aspects of the model. It would help to have generic escape to the monitoring frame for jam resolution.

anonymous:

The real problem that I have with OOP is the ARROGANCE that it has created in the software field. The smartest person in the world is of no use if they can't communicate and be reasonable. There is no reasoning with many OO programmers - they are like creationists (Adam and Eve living with the dinosaurs). OO is good for managing complex implementations, yes. But the problem is that a lot of OO programmers don't TRY to simplify. OO is NOT simplifying - it's just hiding. There is a big difference between simplification and hiding.

anonymous:

I agree with Ralf and Christian Posta. And yes, he really works at Intel! I am a software developer who has worked at Intel in the past. Most of the people doing programming there are hardware people, and that's what you would expect from them, because while writing code to test a chip one needs to know the exact execution path and all minor details in order to track defects in chip design. But that only means you are using the wrong language for your task. In developing software (and not code at the hardware level), if you still find the need to track the exact execution flow in order to find issues with the code, you are a naive programmer using the wrong approach. And obscure error messages in Windows or its applications do not reflect upon OOP; they are just bad programming habits programmers have acquired over time. Take the best quality tool in the market and hand it over to two people - one can create a masterpiece and the other a mess! And these obscure error messages are not a result of complications in tracking control flow in OOP; they exist in C code as well.

anonymous:

Object Oriented Programming: It's a bit like communism - a nice idea, but it doesn't work?

anonymous:

Asaf Shelly is really describing problems (i.e. following the control flow) that are mainly caused by asynchronous processes, and that are also evident in any event-driven programming scenario.

He uses this problem as a stick to criticize OOP.

OOP can be severely criticized without recourse to following the control flow - although it certainly doesn't help. Hidden or obscure processes help no-one.

In my opinion, OOP creates many more problems than it supposedly solves, causes unnecessary overheads and certainly bloats code.

I personally use "control tables" to structure programs (and have done so successfully for 40+ years). The logic of even a complex program can be easily built into a decision table that is designed to be "executed" quite efficiently. At a glance, complex relationships can be viewed effortlessly and changed easily (without the need to change the interpreter in most cases).

It is my experience that most proponents of object oriented programming don't really understand their subject deeply enough even to explain how its nuts and bolts work. It is as if they worked on some other plane of existence to that of the real world and left it to compiler writers to sort out the mess.

Assembler programmers (or high-level language programmers who are aware of how the machine functions) are much more savvy than OOP programmers, who strangely think in terms of creating "methods" to make Lassie (who is a dog) save Timmie (who is a child). To use the same metaphor - I think OOP programmers have been "sold a pup" that is perhaps deaf and blind and has canine hip dysplasia.

Let us inform the Kennel club of the dubious breeding practices and - please can we have our money back?

Ken

anonymous:

Asaf,

Thanks for your post! It gives an excellent path to some vibrant discussions.

I would like to focus on the title of your post, and how I don't believe the "flaw of object oriented modeling" is a flaw at all but rather two different approaches to programming with each one best suited for different contexts.

You say, "Object Oriented Modeling was invented to help developers manage the code", but this is not entirely the case.

Modeling real-life systems has been around for many hundreds of years, and surely longer. Modeling can be found throughout many disciplines, including engineering, architecture, mathematics, et al. As mentioned in previous comments, with which I fully agree, modeling is an approach to problem solving that helps manage complexity in a given domain. Trying to implement a model in software can become difficult using languages that are not well suited for modeling. Object-oriented languages are a better fit for models because of their state-plus-behavior (object) approach. Procedural languages are not as good because they're focused more on completing certain predefined tasks and not on capturing the concepts in the model. Object-oriented modeling is the implementation of a model with an object-oriented language, and it can reap the great benefits of modeling. To say it was invented to "help developers manage the code" is overly simplistic and misses the point of modeling in the first place.

On the other hand, you're somewhat correct when you say modeling in software reduces the ability to follow an execution path. The reason is simply that modeling is focused on concepts and behavior, not on a predefined, task-oriented series of steps. Therefore, the reason you put forth for a "flaw in object modeling" is no flaw at all. It's a completely separate approach for solving a problem. If the problem has to do with mathematics and physics, no doubt you would model it. If the problem has to do with a complex business domain, a model will help manage all of the complexity found in such a domain. If your problem is writing device drivers or embedded systems software, a procedural approach probably would be more appropriate.

Can you write device drivers using a modeling approach with an object-oriented language? Probably. But it might be overkill, muddy the tasks taken to perform the driver's functionality, and obscure the execution path.

Can you write complicated business logic in a purely procedural manner? Probably. But the software would end up looking like a monstrous tangle of functions without key active elements in the domain clearly conceptualized, and maintenance would be a nightmare as a better understanding of the domain emerges.

Your post must assume that the people who argue 'for' object-oriented modeling argue for it as a solution to every problem. That is most certainly not the case. With the understanding that object-oriented modeling, or rather modeling in general, is appropriate for certain cases, your post demonstrates no "flaw" in object-oriented modeling at all. Either focusing on concepts or focusing on tasks is an appropriate approach given the context.

Thanks again for your post.

Christian Posta

levtraru:

Maybe it is not necessary to control or care about the whole execution flow.

Some of the keys of OOP are to identify responsibilities and to delegate.

When you send a message to an object and you don't get the desired result then maybe you are delegating into the wrong object or that object is not accomplishing its responsibility. In the first case you should correct the calling object to call another one; in the second case you can forget about the calling object and concentrate on correcting the called one.
Defining unit tests is very useful to achieve this.

In OOP it is really important to care about design. Most programmers learned to program in a procedural way and then try to think the same way when programming object oriented code.

OOP is not better or worse than Procedural Programming; it is just different.

Kind regards.

Asaf Shelly:

Hi Mark and Mauricio,

Thank you for the interesting comments and apologies for the delay.
I will answer these last to first (cache reasons :)

First of all, let me declare that I am not purely a hardware developer. Here are a few lines of OO code that I wrote in the past ten years:
http://www.asyncop.com/MTnPDirEnum.aspx?treeviewPath=%5bm%5d+Tools%5c%5b-%5d+WinModules

I agree that OO helps manage huge amounts of code. With that said, I have a huge problem with Windows Media Player telling me that the video cannot be played because - the file is corrupt - or the website is down - or there is no Internet connection. This is because someone was such a good OO programmer that they completely ignored return values. This is so common that Exceptions are used to force programmers to handle errors.. another bad, bad thing that comes from OO paradigms...

My problem is not with OOP; it is with the paradigms surrounding it. The Windows NT Kernel is fully OO - every driver is an object - and it is fully parallel. No one considered writing 2 to 4 lines of code per function in a device driver.

Sometimes you cannot reduce the number of threads to 1, because then you won't find the data races. What if you have a race between function 23 in one thread's call stack and function 15 in another thread's call stack?

You don't find flow-control bugs by running the application; you find these bugs by going over the flow diagram. OOD does not contradict flow diagrams, but the methodologies used today with OO do not even mention them.

Only OOP could produce the term "Random Bug", meaning "a bug that happens every now and then, not sure why". These bugs are "random" because they are related to flow control, and only a few OO experts have ever mentioned flow control as part of the system design.

Regards,
Asaf

anonymous:

"You can no longer follow the execution flow by reading the code."

This problem is not related to the programming paradigm but to the size of the system. The decomposition provided by procedural programming and OO made bigger systems viable. Because procedures and OO came together with bigger systems, one might think it's the newer paradigms that are making code harder to read. Comparing a small assembler program to a large OO one is a mistake. Code a million lines of assembler and you will have a lot more trouble reviewing it than the equivalent in any high-level language, procedural or OO.

"The second problem with this model is the "Not my fault" syndrome".

It is again a problem related to the size of a system. You divide work between people if you want to finish a large project in a reasonable time. You reuse someone else's code if you want to finish it earlier. You blame someone else when the project starts going wrong. It happens in any programming language with any paradigm. It DOES happen with assembler when you use CALL/RET statements.

"If you have multiple threads going over the same function they might all stop on the same breakpoint and you have no way of telling which is which effectively."

Good parallel architectures allow you to increase or decrease the number of threads without changing the behaviour of the system, changing only the performance. So if you need to use breakpoints, just decrease the number of threads to 1, debug, then increase it again.

Actually, OO helps manage parallelism better than any paradigm before it. The main headache in parallel programming is avoiding those hard-to-debug race conditions. Race conditions occur when two or more pieces of code try to manipulate the same shared data at the same time. Since good OO design groups data together with the instructions that manipulate it, all the instructions that can interfere with each other stay together and can be handled as one.

If there is a problem with OO, it is that it requires quite an amount of experience to be used effectively and can be disastrous in the hands of the inexperienced. Combine a lot of gotos with polymorphism, or a very tall inheritance structure, and you will have polymorphic spaghetti code worse than any anti-pattern in the older paradigms.


