SOURCES OF DISAGREEMENT Clause Samples

SOURCES OF DISAGREEMENT. While the PG&E evaluation committee and ▇▇▇▇▇▇ Seco Consulting did disagree on some specific decisions in the administration of the evaluation process, nearly all of these issues were resolved in the course of review. Issues underlying disagreements included:

  • ▇▇▇▇▇▇ disagreed with some of the PG&E team’s preliminary assignments of Offers to local nodal areas or to pricing zones. After review and discussion, these disagreements were resolved, either through changes to the assignments or through agreement that the assignments were correct.
  • ▇▇▇▇▇▇ disagreed with initial analyses in which PG&E assigned Resource Adequacy value to a few Offers that proposed to interconnect intermittent generation facilities outside the CAISO grid. Upon review, the PG&E team agreed that these Offers would not likely provide RA value to customers.
  • ▇▇▇▇▇▇ suggested that selection of Imperial Valley Offers with viability scores below PG&E’s viability cutoff would amount to a preference for Imperial Valley projects. Preferential treatment of such Offers was explicitly rejected for the 2009 RPS RFO in the CPUC’s Decision approving the 2009 procurement plans. Based on guidance from PRG members, PG&E chose to drop one such Offer from its draft short list; another failed to stay on the final short list.
  • PG&E made a preliminary selection of projects from two Developers that were not the Participant’s highest-valued Offers; upon review, and given feedback from PRG members and the IE, PG&E decided to select higher-valued Offers.
  • ▇▇▇▇▇▇’▇ Project Viability Calculator scores for many individual Offers varied considerably from the PG&E team’s scores. Upon comparison and discussion, PG&E revised its scores downward for some Offers that it had included in a preliminary draft short list, and consequently decided to reject those Offers from the final short list. Similarly, ▇▇▇▇▇▇ was convinced by PG&E’s analysis to revise some of its Calculator scores upward for Offers that PG&E had placed on the preliminary draft short list and to which ▇▇▇▇▇▇ had raised objections.
  • In the final short list, PG&E selected a few Offers that met its value cutoff but fell below the cutoff for viability. For most of these, ▇▇▇▇▇▇ concurred with the decision to short-list based on other considerations.
  • One Offer, described previously, was short-listed on the basis of achieving greater portfolio diversity by providing a proposed project with a different technology. The PG&E team scored this proposal a...
SOURCES OF DISAGREEMENT. ▇▇▇▇▇▇ disagreed with one aspect of how PG&E applied its methodology and with a few of the choices made in the selection process. Specific areas of disagreement included:
SOURCES OF DISAGREEMENT. To better understand the sources of disagreement, we calculated Kappa for the two following cases:

1. Combining the two middle categories of the adequacy scale (L and P). If there is confusion between these two categories, then agreement would be expected to increase when they are combined. This results in a three-category scale (F, [L,P], N).
2. Combining the categories at the ends of the scale (F and L, and P and N). If there is confusion between the F and L categories and between the P and N categories, then agreement would be expected to increase when these categories are combined. This results in a two-category scale ([F,L], [P,N]).

The results from this analysis also depended on the process being assessed. For the processes that had perfect agreement, there is no confusion, so combining categories has no effect. Of the remaining processes, eight increased their interrater agreement when rating categories were combined, which also helps identify potential confusion amongst categories. Processes ENG.3 and SUP.5 did not benefit from category combination. This may be due to the distribution of the responses (i.e., there was little variation in the data set) rather than to equal confusion amongst all of the categories. Processes ENG.2, ENG.4, ENG.5, ENG.7, and SUP.2 benefited substantially from reducing the four-point rating scale to a two-point scale. Process PRO.2 benefited substantially from reducing the four-point scale to a three-point scale. Finally, processes ENG.6 and ORG.2 benefited from scale reduction, but no particular category-combination strategy appeared most useful. Based on an examination of the cell proportions in a 4x4 table for each of ENG.6 and ORG.2, it is evident that there is a similar amount of confusion between all of the adjacent categories of the four-point scale. These latter two would require further investigation to determine whether, for example, an alternative scale altogether may increase agreement, or whether an improved definition of the categories on the scale would be sufficient.
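The category-combination check described above can be sketched as a small computation: compute Cohen's Kappa on the original four-point adequacy scale (F/L/P/N), then recompute it after merging categories. The ratings below are hypothetical, invented purely to illustrate the mechanics; the grouping strategies are the two from the text.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's Kappa for two raters over the same items."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(r1) | set(r2)
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in cats)     # chance agreement
    return 1.0 if p_exp == 1.0 else (p_obs - p_exp) / (1 - p_exp)

def merge(ratings, groups):
    """Relabel each rating with its merged-category label."""
    lookup = {cat: label for label, cats in groups.items() for cat in cats}
    return [lookup[r] for r in ratings]

# Hypothetical ratings on the four-point adequacy scale
rater1 = ["F", "F", "L", "P", "P", "N", "L", "F", "P", "N"]
rater2 = ["F", "L", "P", "L", "P", "N", "L", "F", "N", "P"]

k4 = cohen_kappa(rater1, rater2)                              # four-point scale

# Case 1: merge the middle categories -> (F, [L,P], N)
g3 = {"F": ["F"], "LP": ["L", "P"], "N": ["N"]}
k3 = cohen_kappa(merge(rater1, g3), merge(rater2, g3))

# Case 2: merge the end pairs -> ([F,L], [P,N])
g2 = {"FL": ["F", "L"], "PN": ["P", "N"]}
k2 = cohen_kappa(merge(rater1, g2), merge(rater2, g2))

print(round(k4, 2), round(k3, 2), round(k2, 2))               # prints: 0.33 0.5 0.6
```

With this made-up data, Kappa rises under both grouping strategies, and rises more for the two-point scale, which is the signature of confusion concentrated at the category boundaries being merged.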
SOURCES OF DISAGREEMENT. To better understand the sources of disagreement, we calculated Kappa for the two following cases:

1. Combining the two middle categories of the adequacy scale (L and P). If there is confusion between these two categories, then agreement would be expected to increase when they are combined. This results in a three-category scale (F, [L,P], N).
2. Combining the categories at the ends of the scale (F and L, and P and N). If there is confusion between the F and L categories and between the P and N categories, then agreement would be expected to increase when these categories are combined. This results in a two-category scale ([F,L], [P,N]).

The results of these combinations are also shown in Figure 8. In both cases, the combination of categories increases the value of Kappa. However, for the data from the first study the two-category scale results in a larger increase in agreement than the three-category scale. This suggests that there is more confusion in rating practices at the end points of the adequacy scale for that process. Conversely, the difference between the two grouping strategies in study 2 is quite small (0.76 vs. 0.79). This indicates that there is possibly as much confusion of categories at the end points of the scale as at the middle points (according to the two grouping strategies presented above). Subsequently, we considered the extent of agreement at high capability levels compared to low capability levels. From the overall ratings summaries from the SPICE trials (see [14]), it was clear that there was a paucity of ratings better than "Not Adequate" for the Engineering and Project process categories at higher maturity levels. Since there are few instances rated highly at the higher capability levels, there is little knowledge about higher-level generic practices, and greater disagreement in higher capability ratings would therefore be expected. High capability levels include levels 3 to 5; low capability levels include levels 1 and 2. As can be seen in Figure 8, for the ENG.3 process there is not much difference in the Kappa values (0.46 vs. 0.44). However, for the PRO.5 process, agreement at the low capability levels is considerably larger than agreement at the high capability levels (0.72 vs. 0.54). This may be because assessors have less understanding of processes for managing quality at higher levels of capability (e.g., quantitative control and improvement of quality manageme...
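The low-versus-high capability comparison above amounts to stratifying the rating pairs by capability level and computing Kappa within each stratum. A minimal sketch, again on hypothetical data (the levels, ratings, and resulting values are invented for illustration, not taken from the SPICE trials):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's Kappa for two raters over the same items."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum(c1[c] * c2[c] / n**2 for c in set(r1) | set(r2))
    return 1.0 if p_exp == 1.0 else (p_obs - p_exp) / (1 - p_exp)

# Hypothetical per-item records: (capability level, rater-1 rating, rater-2 rating)
ratings = [
    (1, "F", "F"), (1, "L", "L"), (2, "P", "P"), (2, "F", "L"),
    (1, "N", "N"), (2, "L", "L"), (3, "P", "N"), (4, "N", "P"),
    (3, "F", "L"), (5, "N", "N"), (4, "P", "P"), (3, "L", "P"),
]

# Stratify: low capability = levels 1-2, high capability = levels 3-5
low  = [(a, b) for lvl, a, b in ratings if lvl <= 2]
high = [(a, b) for lvl, a, b in ratings if lvl >= 3]

k_low  = cohen_kappa(*zip(*low))    # Kappa within the low-capability stratum
k_high = cohen_kappa(*zip(*high))   # Kappa within the high-capability stratum
```

In this fabricated data set the raters agree much more often at low levels, so `k_low` comes out far above `k_high`, mirroring the PRO.5 pattern (0.72 vs. 0.54) reported in the text.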

Related to SOURCES OF DISAGREEMENT

  • Resolution of Disagreements Disputes arising under this Agreement will be resolved informally by discussions between Agency Points of Contact, or other officials designated by each agency.

  • Notice of Dispute If a Party claims that a dispute has arisen under this Agreement (“Claimant”), it must give written notice to the other Party (“Respondent”) stating the matters in dispute and designating as its representative a person to negotiate the dispute (“Claim Notice”). No Party may start Court proceedings (except for proceedings seeking interlocutory relief) in respect of a dispute unless it has first complied with this clause.

  • Dispute Notice If there is a dispute between the parties, then either party may give a notice to the other succinctly setting out the details of the dispute and stating that it is a dispute notice given under this clause 17.1.

  • Disagreement Any dissension between the parties other than a grievance defined in the agreement and other than a dispute defined in the Labour Code.

  • Notice of Disputes Notice of the dispute will be submitted on the form provided in Appendix A and sent to the responding party, in order to provide an opportunity to respond. The Crown shall be provided with a copy. a) Notice of the dispute shall include the following: i. Any central provision of the collective agreement alleged to have been violated. ii. The provision of any statute, regulation, policy, guideline, or directive at issue. iii. A comprehensive statement of any relevant facts. iv. The remedy requested.