First, let me thank you for considering my request.

Currently there are three basic methods to pass parameters:

- by reference, with implicit strong type checking
- by reference, with loose type checking (keyword CONST)
- by value, with loose type checking (keyword VALUE)

A further variant applies to data structures, for which the CONST and VALUE keywords are not allowed: they are obviously passed by reference, but without strong type checking. The current parameter-passing rules are not consistent, and the type-checking rules therefore need to be normalized. A new method allowing data structures to be passed by value would be the logical complement, but is most probably not necessary at the current stage.

I consider the loose type checking of CONST definitions to be the biggest problem. With the introduction of strong type-checking rules, CONST would allow the programmer to force the compiler to check the integrity of passed parameter values at compile time.

Say you define a function to read customer data which takes three parameters: company id (3,0), customer id (10,0), customer branch id (3,0).

- Declaring the parameters without any keyword causes the compiler to error at compile time if invalid values are passed.
- Declaring a parameter with CONST only causes a compile-time error if a value of a different type is passed; the precision of the value is not checked at all.
- Declaring a parameter with VALUE makes no (or only an internal) difference compared to CONST.

And here is where errors creep in: with CONST or VALUE, a value too big to fit into the receiver variable will eventually cause a runtime error. In modernization efforts, and when decoupling business logic from the database, it would be bad practice to declare program variables based on DB columns.
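To make the customer-data example concrete, here is a minimal sketch in free-form ILE RPG (the procedure and variable names are invented for illustration):

```rpgle
// Prototype with CONST parameters, as described above
dcl-pr readCustomer ind;
  companyId  packed(3:0)  const;
  customerId packed(10:0) const;
  branchId   packed(3:0)  const;
end-pr;

dcl-s companyNo packed(3:0)  inz(1);
dcl-s custNo    packed(10:0) inz(1234567890);
dcl-s branchNo  packed(3:0)  inz(1);

// With CONST the compiler checks only the general type, not the
// precision, so accidentally swapping the first two arguments still
// compiles cleanly...
readCustomer(custNo : companyNo : branchNo);
// ...and the 10-digit value only fails (or is truncated) at run
// time, when it is squeezed into the 3-digit companyId parameter.
```

With strong type checking on CONST, the swapped call above would already be rejected by the compiler.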
The scenario explained above led to nonsensical error behaviour in a program of mine: I implemented a new procedure that writes values to a table by passing a range of numeric values, and accidentally mixed up the sequence of the parameters. Besides that, errors like this are not easy to trap: the (e) opcode extender will not work here, most probably only a MONITOR block will. And of course it is not practical to wrap every procedure call in a MONITOR block.

Another issue is data structures used as parameters or return values. If you implement a programming pattern that makes use of events and event handlers, you are required to use data structures as parameters (much like .NET, which passes the sender and the event data to the event handler). Errors can easily creep in here too, because data structures and strings are handled and accepted interchangeably.

I do understand that, for backwards compatibility, it is not possible to start requiring strong type checking with CONST, hence a new keyword is required. Currently there is no mechanism that checks, at compile time, the "footprint" of the memory portion that is passed to a procedure. A STRONG keyword at the DCL-PR level could cause the compiler to create a unique signature for a procedure even when working with CONST- and VALUE-declared parameters.

By the way, all of the above could also help solve the current issue with the free-format CHAIN opcode: a CHAIN with a key value bigger than the DB column simply crashes the program if it is not monitored!
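To illustrate the CHAIN point, a minimal sketch (the file and field names are invented); as things stand today, only a MONITOR block around the operation reliably survives an oversized key value:

```rpgle
dcl-f custmast keyed;                          // invented keyed file
dcl-s searchId packed(15:0) inz(99999999999);  // wider than the key column

// CHAIN(e) covers file errors, but the conversion of the oversized
// key value can fail before the I/O ever happens, so a MONITOR block
// is needed to catch it:
monitor;
  chain (searchId) custmast;
on-error;
  // handle the size/conversion error instead of crashing
  dsply 'key value does not fit the key column';
endmon;
```

With compile-time "footprint" checking as proposed, the mismatch between searchId and the key column could be flagged before the program ever runs.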