Dear Bruce,
At an early stage of my project I want to capture and evaluate the performance (DMIPS, GPU, accelerators) and memory requirements (ROM, RAM, NVRAM) of software blocks on an embedded computing platform.
What are your suggestions and best practices for doing this? Is it Systems Engineering, Software Engineering, or the HW/SW interface? And how can I trace these requirements to my architecture, given that there are no explicit use cases available to organize them?
Thank you!
Best regards
Matthias
First off, let me say that the performance of a subsystem or component has to do with the performance of something that it DOES (i.e., its behavior), which should ultimately be derived from the requirements of what it SHOULD DO, and therefore from its use cases. For example, a requirement such as "the system shall compute the actuation response within 10 ms of sensor input" carries its performance specification from the start. Having said that, sometimes you start thinking about performance somewhere downstream of requirements, and so it must be dealt with later. But in principle, such thinking should start at the requirements level. It is my belief that performance concerns should ALWAYS start at requirements, and if you have requirements that do not include performance specifications but these concerns arise later, then you missed some requirements!
Performance properties can be associated with various kinds of behavior: state machine transitions, functions, primitive actions, and collections of more primitive behaviors. You needn't always specify performance detail at every level of abstraction, but in real-time systems such specifications will be needed at one or two levels at a minimum, usually the architectural and detailed design levels.
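To make the "capture and evaluate" part concrete, here is a minimal sketch of timing a single software block, assuming an ARM Cortex-M3/M4 target with the standard DWT cycle counter; the register addresses are the architecturally defined ones, while the block under test and the wrapper function are hypothetical names for illustration:

```c
#include <stdint.h>

/* Memory-mapped debug registers on ARM Cortex-M3/M4 (assumed target). */
#define DEMCR      (*(volatile uint32_t *)0xE000EDFCu) /* Debug Exception & Monitor Control */
#define DWT_CTRL   (*(volatile uint32_t *)0xE0001000u)
#define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004u)

/* Measure the core cycles consumed by one run of a behavior. */
static uint32_t measure_cycles(void (*block)(void))
{
    DEMCR    |= (1u << 24);   /* TRCENA: power up the DWT unit        */
    DWT_CTRL |= 1u;           /* CYCCNTENA: start the cycle counter   */

    uint32_t start = DWT_CYCCNT;
    block();                  /* run the behavior being budgeted      */
    return DWT_CYCCNT - start;
}
```

Dividing the returned cycle count by the core clock rate gives the execution time to compare against the performance property attached to that behavior; running it over many invocations gives you worst-case and average figures rather than a single sample.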
At the system level, you are specifying system properties, including functionality, behavior, and performance. After system handoff, those properties need to be decomposed and allocated to engineering disciplines (what the Harmony aMBSE process refers to as "engineering facets"). In the typical case, some of the performance concerns will be allocated to electrical aspects, some to mechanical, some to hydraulic, and some to software. Even if the hardware aspects are known a priori, the performance of those facets must still be known to ensure that the overall performance is within the desired constraints. It often happens that the hardware is fixed and all the engineering work for a project is done in the software, but one must still understand the performance of that hardware to ensure that the system performance complies with the requirements.
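One way to keep such a decomposition honest is to record the facet allocations as named constants and let the build fail when the budget is violated. The figures below are purely illustrative assumptions, not from anything above: a 100 ms end-to-end deadline split across facets.

```c
#include <assert.h>  /* static_assert (C11) */

/* Hypothetical facet allocations of an assumed 100 ms system deadline. */
enum {
    SENSOR_ACQ_MS = 10,   /* electrical facet                 */
    SW_PROCESS_MS = 30,   /* software facet                   */
    ACTUATION_MS  = 40,   /* mechanical facet                 */
    MARGIN_MS     = 20,   /* reserve                          */
    DEADLINE_MS   = 100   /* system-level requirement         */
};

/* Fail the build if the facet budgets no longer fit the deadline. */
static_assert(SENSOR_ACQ_MS + SW_PROCESS_MS + ACTUATION_MS + MARGIN_MS
                  <= DEADLINE_MS,
              "facet budgets exceed the system deadline");
```

The memory side (ROM, RAM, NVRAM) can be budgeted the same way: the text/data/bss figures that a tool such as `size` reports for the linked image give you the actuals to compare against the allocated budgets.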
I assume this is the case you mean when you talk about the difficulty of separating systems engineering from the downstream disciplines. This is a special case of the more general system development problem and doesn't create any additional difficulties; it is just important to remember the scope of concern when performing an engineering task, whether it is a systems task or a software task.
BTW, a use case is a kind of Behaviored Classifier and can therefore also have performance properties associated with its behaviors.