Most of those who adhere to "PMO" (the partial quote, that is) say that optimizations must be based on measurements, and that measurements cannot be performed until the very end.

It is also my experience from large systems development that performance testing is done at the very end, as development nears completion.

If we were to follow the "advice" of these people, all systems would be excruciatingly slow. They would be expensive as well, because their hardware needs would be much greater than originally envisaged.

I have long advocated doing performance tests at regular intervals in the development process: they will indicate both how new code performs (where previously there was none) and the state of existing code. A sketch of such a check follows the list below.

  • The performance of newly-implemented code may be compared with that of existing, similar code. A "feel" for the new code's performance will be established over time.
  • If existing code suddenly goes haywire, you know that something has happened to it and can investigate immediately, not (much) later when it affects the entire system.
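
Here is a minimal sketch (in C) of what such a regular check could look like: time a unit of work and compare it against a baseline recorded in an earlier test run. The function process_batch(), the baseline figure and the 10% tolerance are illustrative assumptions, not part of any particular framework.

```c
/* Minimal sketch of a periodic performance check: time a unit of work,
 * compare it against a recorded baseline, and flag a regression early.
 * process_batch(), the baseline and the 10% tolerance are assumptions. */
#include <stdio.h>
#include <time.h>

/* Hypothetical piece of code under test. */
static void process_batch(void)
{
    volatile long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += i;
}

/* Time one call of fn in milliseconds. */
static double elapsed_ms(void (*fn)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    fn();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1000.0 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void)
{
    const double baseline_ms = 2.5;   /* recorded in an earlier test run */
    double now_ms = elapsed_ms(process_batch);

    printf("process_batch: %.3f ms (baseline %.3f ms)\n", now_ms, baseline_ms);
    if (now_ms > baseline_ms * 1.10)  /* more than 10% slower: investigate now */
        printf("WARNING: performance regression in process_batch\n");
    return 0;
}
```

In practice the baseline would be kept per function and per release, so that a deterioration is flagged in the build where it is introduced rather than at the end of the project.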

Another pet idea is to instrument software at the function block level. As the system executes, it gathers information on execution times for the function blocks. When a system upgrade is performed, it can then be determined which function blocks perform as they did in the earlier release and which have deteriorated. The performance data could be made accessible from the application's help menu.
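
A minimal sketch of that kind of instrumentation, again in C and purely illustrative: each function block is run through a wrapper that accumulates call counts and execution times, and the accumulated report can be dumped on request (for example from a help-menu handler) and diffed against the previous release. The block names and the report format are assumptions.

```c
/* Minimal sketch of function-block instrumentation: each block's execution
 * time is accumulated so the figures can be dumped and compared across
 * releases. Block names and report layout are illustrative assumptions. */
#include <stdio.h>
#include <time.h>

#define MAX_BLOCKS 2

struct block_stats {
    const char   *name;
    unsigned long calls;
    double        total_ms;
};

static struct block_stats stats[MAX_BLOCKS] = {
    { "read_inputs",  0, 0.0 },
    { "update_state", 0, 0.0 },
};

/* Wrap a function block: run it and accumulate its execution time. */
static void run_block(int id, void (*block)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    block();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    stats[id].calls++;
    stats[id].total_ms += (t1.tv_sec - t0.tv_sec) * 1000.0 +
                          (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

/* Report that could be shown on request and diffed between releases. */
static void dump_stats(void)
{
    for (int i = 0; i < MAX_BLOCKS; i++)
        printf("%-12s calls=%lu avg=%.3f ms\n", stats[i].name, stats[i].calls,
               stats[i].calls ? stats[i].total_ms / stats[i].calls : 0.0);
}

/* Dummy function blocks standing in for real application code. */
static void read_inputs(void)  { volatile int x = 0; for (int i = 0; i < 100000; i++) x += i; }
static void update_state(void) { volatile int x = 0; for (int i = 0; i < 300000; i++) x += i; }

int main(void)
{
    for (int cycle = 0; cycle < 100; cycle++) {   /* main execution loop */
        run_block(0, read_inputs);
        run_block(1, update_state);
    }
    dump_stats();
    return 0;
}
```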

Check out this excellent piece on what PMO might or might not mean.
