Dear Bruce,
At an early stage of my project I want to capture and evaluate the performance (DMIPS, GPU, accelerators) and memory requirements (ROM, RAM, NVRAM) of software blocks on an embedded computing platform.
What are your suggestions and best practices for doing this? Is it Systems Engineering, Software Engineering, or the HW/SW interface? And how can I trace these requirements to my architecture when there are no explicit use cases available to organize them?
Thank you!
Best regards
Matthias
First off, let me say that performance of a subsystem or component has to do with the performance of something that it DOES (i.e. its behavior), which should ultimately be derived from the requirements of what it SHOULD DO, and therefore its use cases. Having said that, sometimes you start thinking about performance somewhere downstream of requirements, and so it must be dealt with later. But in principle, such thinking should start at the requirements level. So it is my belief that performance concerns should ALWAYS start at requirements, and if you have requirements that do not include performance specifications but these concerns arise later, then you missed some requirements!
Performance properties can be associated with various kinds of behavior - state machine transitions, functions, primitive actions, and collections of more primitive behaviors. You needn't always specify performance detail at all levels of abstraction, but in real-time systems there will be a need for such specifications at least at one or two levels of abstraction, usually architectural and detailed.
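One lightweight way to make such a specification executable is to attach a timing budget directly to a behavior and flag violations at run time. The sketch below is purely illustrative - the decorator, function names, and budget values are invented, and this is not part of any UML profile; it just shows the idea of a performance property bound to a function-level behavior.

```python
import time

def with_budget(budget_ms):
    """Attach a worst-case execution-time budget (in ms) to a behavior
    and report when a call exceeds it. Illustrative sketch only."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            if elapsed_ms > budget_ms:
                print(f"{fn.__name__}: {elapsed_ms:.2f} ms exceeds "
                      f"budget of {budget_ms} ms")
            return result
        return wrapper
    return decorate

@with_budget(5.0)  # hypothetical architectural-level budget for this action
def process_sensor_frame(frame):
    return sum(frame) / len(frame)
```

In a detailed design you would replace the print with logging or a test assertion, but the point is the same: the budget lives next to the behavior it constrains.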
At the system level, you are specifying system properties, including functionality, behavior, and performance. After system handoff, those properties need to be decomposed and allocated to engineering disciplines (what the Harmony aMBSE process refers to as "engineering facets"). Some of the performance concerns will be allocated to electrical aspects, some to mechanical, some to hydraulic, and some to software, in the typical case. Even if the hardware aspects are known a priori, the performance of those facets must be known to ensure that the overall performance is within the desired constraints. It often happens that the hardware is known and all the engineering work for a project is done in the software, but one must still understand the performance of that hardware to ensure that the system performance complies with the requirements.
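As a first cut, such a decomposition can be checked additively for a simple sequential path - allocate a slice of an end-to-end latency requirement to each facet and verify the slices fit. All the names and numbers below are invented for illustration; as noted later in this thread, the additive check only holds in the simplest cases.

```python
# Hypothetical end-to-end latency requirement for one sequential path (ms).
END_TO_END_BUDGET_MS = 100.0

# Budgets allocated to engineering facets; numbers are invented.
facet_budgets_ms = {
    "sensor_hardware": 15.0,  # known a priori, but still must be budgeted
    "electrical":      10.0,
    "software":        60.0,
    "actuation":       10.0,
}

allocated = sum(facet_budgets_ms.values())
margin = END_TO_END_BUDGET_MS - allocated
assert margin >= 0, "allocation exceeds the system-level requirement"
print(f"allocated {allocated:.1f} ms, margin {margin:.1f} ms")
```

Tightening or loosening one facet's budget immediately shows up in the margin, which is what makes the budgets negotiable artifacts between disciplines.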
I assume this is the case you mean when you talk about the difficulty of separating system engineering from downstream. This is a special case of the more general system development problem and doesn't create any additional difficulties. It is just important to remember the scope of concern when one is performing an engineering task, whether it is a systems or a software task.
BTW, a use case is a kind of Behaviored Classifier and can therefore also have performance properties associated with its behaviors.
Dear Bruce,
in addition to my post above, I read in Real-Time UML (3rd Edition) that the UML SPT Profile specifies performance properties attached to classes, objects, and sequences, but not to use cases.
Your opinion? Thank you!
Best regards
Matthias
Hi Bruce,
thank you. Your suggestion is to start early with performance requirements at a high level during systems engineering. The requirements are refined with each further step and through technological decisions.
In my projects I face some challenges: some performance requirements relate largely to architectural items (subsystems, blocks) and do not fit appropriately into use cases. The second aspect is that I may need to be more detailed at the system level, which makes it difficult to separate downstream engineering from the system level.
Your opinion? Thank you!
Best regards
Matthias
Ideally, performance requirements are identified and characterized during use case analysis and are further refined as technological decisions are made. Certainly, technology choices should be made with respect to optimizing the weighted sum of all qualities of service - such as heat, weight, recurring cost, and performance (such as worst-case or average). We do this in systems engineering trade studies when we decide how to allocate functionality among engineering disciplines, and we do this in electronic, software, and mechanical designs as we decide how best to implement the requirements within a specific engineering discipline.
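A weighted-sum trade study like this reduces to a small calculation once the criteria are scored. The sketch below is hypothetical - the candidates, weights, and normalized scores are all invented - but it shows the mechanics of picking the candidate with the best weighted total.

```python
# Invented weights over qualities of service (must sum to 1.0).
weights = {"performance": 0.4, "heat": 0.2, "weight": 0.2, "recurring_cost": 0.2}

# Invented normalized scores per candidate (higher is better).
candidates = {
    "FPGA_accelerator": {"performance": 0.9, "heat": 0.5,
                         "weight": 0.6, "recurring_cost": 0.4},
    "software_on_MCU":  {"performance": 0.5, "heat": 0.8,
                         "weight": 0.9, "recurring_cost": 0.9},
}

def weighted_score(scores):
    """Weighted sum of one candidate's quality-of-service scores."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
for name in candidates:
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
print(f"selected: {best}")
```

The interesting engineering work is, of course, in choosing defensible weights and scores, not in the arithmetic.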
Performance should be a common thread through all engineering activities where performance is a critical factor, but it should begin in use case/requirements analysis. What I recommend is to first identify end-to-end performance requirements, and then work through a succession of refined performance budgets as functionality (and performance) are allocated in the architecture and design process. This is not solely an additive function, because of the nature of concurrent and asynchronous processing; only in the simplest cases is it additive. In general, you need to apply either queuing-theoretic analysis or something like rate monotonic analysis to do the math. But the fundamental basis is the allocation of performance budgets through a series of increasingly detailed models.
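For the rate monotonic case, the classic Liu and Layland utilization test gives a quick sufficient (not necessary) schedulability check: a set of n independent periodic tasks is schedulable under rate monotonic priorities if U = sum(C_i/T_i) <= n*(2^(1/n) - 1). The task values below are invented for illustration.

```python
# Invented task set: (worst-case execution time ms, period ms).
tasks = [
    (2.0, 10.0),
    (4.0, 40.0),
    (10.0, 100.0),
]

n = len(tasks)
utilization = sum(c / t for c, t in tasks)       # U = sum(C_i / T_i)
bound = n * (2 ** (1.0 / n) - 1)                 # Liu & Layland bound
schedulable = utilization <= bound
print(f"U = {utilization:.3f}, bound = {bound:.3f}, "
      f"schedulable: {schedulable}")
```

If the test fails, the task set may still be schedulable; an exact response-time analysis would then be needed, which is exactly the kind of refinement the increasingly detailed models are for.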