Scenario 3 Clause Samples
Scenario 3. PTC exploits the Programme Intellectual Property on a For-Profit Basis by retaining development/commercialization rights to Product in some regions of the World or with respect to some uses of the Product (either alone or in a collaboration with a Distributor or marketing/sales agent under which PTC retains overall control of commercialization), and outlicenses the Product on an exclusive basis in other regions of the World or with respect to other uses of the Product.
a) In this scenario, any consideration from outlicensing (other than debt at arm's length interest rates or bona fide research funding) shall be divided between the parties according to Base Shares as of the effective date of the outlicense.
b) In addition, following such outlicense, PTC shall pay milestones and royalties based on scenario 1 for those regions of the World or uses of the Product for which it retains rights, subject to the following adjustments:
i) PTC will prepare a written proposal for adjustment to milestones and royalties based on its modeling of the relative values of market share outlicensed vs. market share retained by PTC.
(1) The Trust shall consider PTC’s proposal in good faith, and prepare a written counterproposal if it wishes;
(2) The parties shall negotiate in good faith for reasonable allocation of relative value of markets based on their proposals;
(3) If the parties cannot agree within [**] days, then the matter shall be referred for final determination via arbitration pursuant to Clause 19.3(a).
(4) Once the relative value of the markets outlicensed versus the markets retained by PTC is determined, PTC's obligation to make continuing milestone and royalty payments pursuant to Scenario 1 shall be reduced according to the relative value of the markets outlicensed versus the markets retained. By way of example, if PTC outlicensed [**] of the market value of a Product, then a milestone payment of $[**] owed under scenario 1 would be reduced to a milestone payment of $[**] under this scenario 3, and a [**]% Net Sales royalty under scenario 1 would become a [**]% Net Sales royalty under this scenario 3.
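The reduction in clause (4) is a simple proportional calculation. Because the actual figures in the example are redacted ([**]), the sketch below substitutes hypothetical values (50% of market value outlicensed, a $10,000,000 milestone, a 5% royalty) purely to illustrate the mechanism, not the agreed numbers.

```python
def reduce_for_outlicense(amount: float, outlicensed_share: float) -> float:
    """Reduce a Scenario 1 milestone or royalty in proportion to the market
    value retained by PTC, per clause b(i)(4). `outlicensed_share` is the
    fraction of the Product's market value that has been outlicensed (0.0-1.0)."""
    retained_share = 1.0 - outlicensed_share
    return amount * retained_share

# Hypothetical illustration only -- the real figures are redacted ([**]).
outlicensed_share = 0.50            # assume 50% of market value outlicensed
milestone_scenario_1 = 10_000_000   # assumed $10M milestone under Scenario 1
royalty_rate_scenario_1 = 0.05      # assumed 5% Net Sales royalty

print(reduce_for_outlicense(milestone_scenario_1, outlicensed_share))    # 5,000,000
print(reduce_for_outlicense(royalty_rate_scenario_1, outlicensed_share)) # 0.025, i.e. 2.5%
```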
Scenario 3. If you have a balance of £510, and: your credit limit is £500; your arrears are £50; the minimum payment we ask you for in your statement is £70; and there is a refund to your account of £200 between your statement date and your payment due date; then we will still require you to pay the full minimum payment of £70. We will use both the £200 refund and your £70 payment to reduce your balance.
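Read literally, the clause means the refund does not replace the minimum payment: both are applied against the balance. A minimal sketch of that arithmetic, assuming no further spending, interest or fees in the period:

```python
balance = 510          # statement balance (GBP)
credit_limit = 500     # shown for context only
arrears = 50           # shown for context only
minimum_payment = 70   # still payable in full despite the refund
refund = 200           # credited between statement date and payment due date

# The refund does not reduce the minimum payment owed; both the refund
# and the customer's payment are used to reduce the balance.
balance_after = balance - refund - minimum_payment
print(balance_after)   # 240, back under the 500 credit limit
```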
Scenario 3. If the total of the Adjusted Amounts for all Claimants with Valid Claims is equal to the Net Settlement Fund, Claimants with Valid Claims shall be paid their Adjusted Amounts. If the total of the Adjusted Amounts for all Claimants with Valid Claims exceeds the Net Settlement Fund, then the Adjusted Amount for each Claimant with a Valid Claim shall be the Adjusted Amount decreased to a lower percentage, on a pro-rata basis, until the total of the Adjusted Amounts equals the Net Settlement Fund. In this event, the Adjusted Amounts for each Payment Group will be reduced in a manner that maintains the 10 / 5 / 3 ratio of percentages between Payment Groups 1, 2, and 3, respectively, as specified in Section 2.4.3 (i-iii) above. To illustrate, if the total of the Adjusted Amounts for all Valid Claims were twice the Net Settlement Fund, Claimants in Payment Group 1 would receive 50% of their Amounts Allegedly Withheld, Claimants in Payment Group 2 would receive 25% of their Amounts Allegedly Withheld, and Claimants in Payment Group 3 would receive 15% of their Amounts Allegedly Withheld.
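The pro-rata reduction preserves the 10 / 5 / 3 ratio because every Adjusted Amount is scaled by the same factor (Net Settlement Fund divided by total Adjusted Amounts). A minimal sketch, using the base percentages of 100% / 50% / 30% for Payment Groups 1, 2 and 3 that the clause's own illustration implies:

```python
# Base percentages of "Amounts Allegedly Withheld" per Payment Group,
# as implied by the illustration (100/50/30 preserves the 10/5/3 ratio).
base_pct = {1: 1.00, 2: 0.50, 3: 0.30}

def payout_percentages(total_adjusted, net_settlement_fund):
    """Scale every group's percentage by the same factor so total payouts
    fit the Net Settlement Fund while keeping the 10/5/3 ratio."""
    scale = min(1.0, net_settlement_fund / total_adjusted)
    return {group: pct * scale for group, pct in base_pct.items()}

# Illustration from the clause: total Adjusted Amounts = 2 x the Fund.
print(payout_percentages(total_adjusted=2_000_000, net_settlement_fund=1_000_000))
# -> {1: 0.5, 2: 0.25, 3: 0.15}, i.e. 50% / 25% / 15%
```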
Scenario 3 vulnerability in a critical device (ICS)
Scenario 3. For scenario 3, the selection was made in view of the fact that the Carbon4PUR polyol production shall be constructed as an add-on to a steel mill, i.e., the place where the CO/CO2 waste gases emerge. Thus, the transport of these gases can be avoided. On the other hand, the epoxide availability was set as non-mandatory. However, at least an olefin source (chemical site or olefin pipeline) had to be near the steel mill. The distance between the CO source and the olefin source was allowed to be up to 30 km. With these inclusion criteria, five European regions have been identified as feasible replication sites. These regions are:

The Port of Marseille, which has already been identified in scenarios 1 and 2. This location is certainly the most attractive option from the point of view of the Carbon4PUR project partners, as all considerations and studies are focussed on this location, where the industrial partners are co-located. The nominal annual polyol production capacity can be more than 5-fold (277 kt/a) compared to the intended capacity. Both CO/CO2 gas streams and the needed epoxides are available, and there is no need to construct an olefin-to-epoxide oxidation plant.

The ArcelorMittal steel mill in the region of Zeeland (Terneuzen/Gent) has a nominal annual polyol production capacity about 25% higher than the ArcelorMittal FOS steel mill at the Port of Marseille. However, the distance to the next epoxide source, i.e., DOW Benelux N.V., is about 18 km.

The Duisburg/Essen region has the highest nominal annual polyol production capacities of 685 kt/a and 433 kt/a, with emissions from Hüttenwerke ▇▇▇▇▇ Mannesmann GmbH and thyssenkrupp Steel Europe ▇▇ ▇▇▇▇ Schwelgern, respectively. However, the epoxides are only available at a distance of about 45 km, where the Covestro Deutschland AG polyol plant is located. On the other hand, olefins would be available at shorter distances (5-20 km) from the nearby pipelines. However, this would require the construction of an olefin-to-epoxide oxidation plant.

The regions of Amsterdam and Hall are ranked lowest within this scenario, as the distances between the CO and olefin sources are 25 km and 32 km, respectively. As there are no epoxide sources in the near vicinity, the construction of an olefin-to-epoxide oxidation plant would be necessary.

Scenarios 2 and 3 are hard to rank against each other. On the one hand, scenario 2 has selected only sites where the epoxides are already available. Thus, the constr...
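The inclusion criteria described above (an olefin source within 30 km of the CO/CO2-emitting steel mill is mandatory; an epoxide source is optional but removes the need for an olefin-to-epoxide oxidation plant) can be summarised as a simple screening rule. The sketch below is illustrative only: the site records and distances are placeholders, not figures from the study.

```python
# Hypothetical candidate-site records; distances in km. Values are
# placeholders for illustration, not data from the Carbon4PUR study.
sites = [
    {"name": "Example site A", "olefin_distance_km": 18, "epoxide_on_site": True},
    {"name": "Example site B", "olefin_distance_km": 32, "epoxide_on_site": False},
]

MAX_CO_TO_OLEFIN_KM = 30  # scenario 3 inclusion criterion

def screen(site):
    """Apply the scenario 3 criteria: the olefin source must lie within 30 km
    of the steel mill; an epoxide source is optional, but its absence means an
    olefin-to-epoxide oxidation plant would have to be built."""
    if site["olefin_distance_km"] > MAX_CO_TO_OLEFIN_KM:
        return None  # excluded from the scenario
    return {"name": site["name"],
            "needs_oxidation_plant": not site["epoxide_on_site"]}

print([screen(s) for s in sites])
```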
Scenario 3. Same population, different data sources
(a) The first refers to the situation in which the same variables or statistics are being estimated by pooling together multiple sources, such as two sample surveys on the same topic, two different types of surveys with a common subset of variables (such as household income in income surveys versus income in budget/expenditure surveys), or two sources of different types but providing information on a common set of variables (for example, income from interviews versus from administrative sources). In such situations, the pooling essentially involves aggregation by giving weights to different sources in proportion to their expected degrees of reliability, as sketched after this list. An example of this category can be found in Di Marco (2006).
(b) The second type of situation involves pooling of substantively different types of data or indicators so as to construct more complex, composite indicators. The different types of data may come from different sources, or from different parts of the same source; they may even refer to the same individual units at the micro level. Typically, the pooling involves the construction of new variables or estimates for a given sample, rather than of the same measures over different samples. A good example is provided by the construction of indicators of multi-dimensional deprivation from indicators of monetary and non-monetary aspects of poverty (see ▇▇▇▇▇ et al. (2006)).
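For the first type of pooling in (a), "weights in proportion to expected reliability" is commonly implemented as inverse-variance weighting; that is one concrete choice assumed here for illustration, not something prescribed by the text. A minimal sketch for two sources estimating the same quantity:

```python
def pooled_estimate(estimates, variances):
    """Pool several estimates of the same quantity, weighting each source in
    proportion to its reliability (here: the inverse of its sampling variance)."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# e.g. mean household income from a survey vs. an administrative source
# (illustrative numbers): the more reliable source pulls the pooled value toward it.
print(pooled_estimate(estimates=[30_500, 29_800], variances=[250_000, 100_000]))
# ~30,000
```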
Scenario 3. TCCA has issued a design approval to a Canadian DAH and EASA validation is in progress. The procedure for continued validation is:
a. The Canadian DAH will apply for a CAA validation pursuant to the TIP, and provide the CAA with the same documentation and data package provided to EASA; and
b. At the discretion of the Canadian DAH, validation may be completed under one of the following alternatives:
(i) If the current EASA validation activity for that application will result in an EASA validation design approval issued no later than December 31, 2022, the CAA will take into consideration the completion of the EASA validation activity and the issuance of the EASA design approval as a basis for the CAA to issue its own approval without further technical involvement by the CAA; or
(ii) If the EASA validation will not be completed by December 31, 2022, the CAA and TCCA will mutually recognize and accept all EASA and TCCA validation decisions made to date and continue to follow or maintain the EASA/TCCA project validation plan to the greatest extent practicable. TCCA and the CAA will follow the validation procedures in the TIP that are applicable to the remaining parts of the project.
Scenario 3. NN’s payment to IPH of royalties on Net Sales of any Niche Candidate with respect to which NN has exercised its Buy-In-Option and IPH has exercised its opt out option pursuant to Subsection 6.5.2 shall be as follows:
Scenario 3. Scenario 3 describes a 5 km x 5 km region of ▇▇▇▇▇▇ in Berlin, which is a typical residential area. The expected volume of data traffic in this region is estimated to be relatively low compared with the other scenarios. Therefore, the MCS density takes the lowest value, ~0.56 MCSs/km2. Accordingly, 260 SCSs are considered to be deployed in this scenario, resulting in a low density of ~10 SCSs/km2.
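The quoted densities are consistent with the 25 km2 area; a quick check of the arithmetic (the absolute MCS count is not stated in the text and is only inferred here):

```python
area_km2 = 5 * 5              # 5 km x 5 km region -> 25 km^2

scs_count = 260
print(scs_count / area_km2)   # 10.4 -> quoted as "~10 SCSs/km2"

mcs_density = 0.56            # MCSs/km2, the lowest of the scenarios
print(mcs_density * area_km2) # ~14 -> implies roughly 14 MCSs in the region
```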
Scenario 3. With the new Tomcat and Native library installed and the considerably improved performance, it was time to run the experiments with more data. However, it must be remembered that the non-finishing workflows are still an issue. The Taverna developers ran the experiment several times without being able to reproduce the "waiting for data" messages. This could mean there is a problem in the local Taverna installation being used to run workflows on the PC at UPF. The Taverna support team recommended using Taverna 2.3.0 and also trying the command line instead of the graphical interface. One very important advantage of the command-line tool for running workflows is that results are written to a directory during the execution, so there is no need to wait until the end. This way, the problem with saving results in the graphical workbench is avoided.

New experiments with 2k, 3k, 5k and 10k documents (1k files is around 1M words) showed that Taverna must be used without the "in-memory" parameter activated for such large amounts of data, or it will consume all of the PC's memory resources. The experiments with 5k documents took between 110 and 200 minutes to run. Unfortunately, it was always necessary to cancel execution for the workflow to finish. The "waiting for data" message appears on a few executions (1/10), leaving ▇▇▇▇▇▇▇ without knowing when the workflow ends and waiting for results that never arrive. We expect to solve this problem in collaboration with the Taverna support team.

Experiments with 10k documents also suffered from the non-finishing execution problem. They took between 5 and 7 hours and were useful to find another situation that needs to be solved. Some Linux file systems can only have 32k folders in one directory. The Soaplab server stores all temporary data files in the same folder. If a workflow with three Soaplab web services deployed on the same server is run with 10k input files, 30k folders will be needed. If some temporary files are already there, the limit is very easy to reach, causing the systematic failure of all new executions. To address this 32k limit, the Soaplab developers have been contacted to improve the temporary-files management. If there is no solution soon, a script to automatically erase temporary files when the 32k limit is near will be developed and deployed.
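A cleanup script of the kind mentioned above could look like the sketch below: it deletes the oldest Soaplab temporary subdirectories once the directory approaches the 32k-entry limit. The path and threshold are assumptions for illustration; the real script would depend on how Soaplab is configured on the server.

```python
#!/usr/bin/env python
"""Sketch of a cleanup script for Soaplab temporary folders.

Some Linux file systems allow at most ~32k subdirectories per directory,
so the oldest temporary folders are removed before that limit is reached.
The path and thresholds below are illustrative assumptions."""
import os
import shutil

SOAPLAB_TMP = "/var/tmp/soaplab"   # assumed location of the Soaplab temp data
LIMIT = 32_000                     # file-system limit on subdirectories
HEADROOM = 2_000                   # start cleaning when this close to the limit

def cleanup(tmp_dir=SOAPLAB_TMP, limit=LIMIT, headroom=HEADROOM):
    entries = [os.path.join(tmp_dir, name) for name in os.listdir(tmp_dir)]
    subdirs = [path for path in entries if os.path.isdir(path)]
    excess = len(subdirs) - (limit - headroom)
    if excess <= 0:
        return 0
    # Remove the oldest folders first (by modification time).
    subdirs.sort(key=os.path.getmtime)
    for path in subdirs[:excess]:
        shutil.rmtree(path, ignore_errors=True)
    return excess

if __name__ == "__main__":
    print("removed", cleanup(), "temporary folders")
```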