
Reliability Survey of Military Acquisition Systems

Jonathan L. Bell, Ph.D., Institute for Defense Analyses
Matthew R. Avery, Ph.D., Institute for Defense Analyses
Michael C. Wells, Institute for Defense Analyses

Key Words: Survey, Department of Defense, Military, Reliability

SUMMARY & CONCLUSIONS

Improving the reliability of military systems within the Department of Defense (DoD) is a key priority. Test results from the last few decades indicate that the DoD has not yet realized statistically significant improvements in the reliability of many systems. However, there is evidence that systems that implemented a comprehensive reliability growth program are more likely to meet their development goals. Reliable systems cost less overall, are more likely to be available when called upon, and enable a longer system lifespan. Reliability is more effectively and efficiently designed in early (design for reliability) rather than tested in late. While more upfront effort is required to build reliable systems, the future savings potential is too great to ignore.

At the request of the Director, Operational Test and Evaluation (DOT&E), the Institute for Defense Analyses (IDA) has conducted annual reliability surveys of DoD programs under DOT&E oversight since 2009 to provide a continuing understanding of the extent to which military programs are implementing reliability-focused DoD policy guidance and to assess whether the implementation of this guidance is leading to improved reliability. This paper provides an assessment of the survey results.

Overall survey results support the understanding that systems with a comprehensive reliability growth program are more likely to meet reliability goals in testing. In particular, the results show the importance of establishing and meeting Reliability, Availability, and Maintainability (RAM) entrance criteria before proceeding to operational testing (OT). While many programs did not establish or meet RAM entrance criteria, those that did were far more likely to demonstrate reliability at or above the required value during OT. Examples of effective RAM entrance criteria include (1) demonstrating, in the last developmental test event prior to the OT, a reliability point estimate that is consistent with the reliability growth curve, and (2) for automated information systems and software-intensive sensor and weapons systems, ensuring that there are no open Category 1 or 2 deficiency reports prior to OT. There is also evidence that having intermediate goals linked to the reliability growth curve improves the chance of meeting RAM entrance criteria.

The survey results also indicate that programs are increasingly incorporating reliability-focused policy guidance, but despite these policy implementation improvements, many programs still fail to reach reliability goals. In other words, the policies have not yet proven effective at improving reliability trends. The reasons programs fail to reach reliability goals include inadequate requirements, unrealistic assumptions, lack of a design-for-reliability effort, and failure to employ a comprehensive reliability growth process. Although the DoD is in a period of new policy that emphasizes good reliability growth principles, without consistent implementation of those principles the reliability trend will likely remain flat.

In the future, programs need to do a better job of incorporating a robust design and reliability growth program from the beginning that includes the design-for-reliability tenets described in ANSI/GEIA-STD-0009, "Reliability Program Standard for Systems Design, Development, and Manufacturing." Programs that follow this practice are more likely to be reliable. There should be a greater emphasis on ensuring that reliability requirements are achievable and that reliability expectations during each phase of development are supported by realistic assumptions linked with systems engineering activities. Programs should also establish RAM entrance criteria and ensure these criteria are met prior to proceeding to the next test phase. A program's reliability growth curves should be constructed with a series of intermediate goals, with time allowed in the program schedule for test-fix-test activities to support achieving those goals. Finally, when sufficient evidence exists to determine that a program's demonstrated reliability is significantly below the growth curve, that program should develop a path forward to address shortfalls and brief their corrective action plan to the acquisition executive.

1 INTRODUCTION

DOT&E is the principal staff assistant and senior advisor to the Secretary of Defense on operational test and evaluation (OT&E) in the DoD. DOT&E oversees major DoD acquisition programs to ensure OT&E is adequate to confirm the operational effectiveness and suitability of the defense system in combat use [1]. Data from DOT&E reports to Congress suggest that despite the establishment over the years of policies intended to encourage development of more reliable systems, DoD system reliability has not improved. From 1997 to 2013, only 56 percent of the systems that underwent an OT met or exceeded their reliability threshold requirements [2]. Further analysis suggests there has been no improvement over time in the fraction of programs meeting their reliability requirements.

To better understand these trends, DOT&E requested IDA to conduct a survey of military programs in each of the past five years to determine the extent to which reliability-focused policy guidance is being implemented and to assess whether it is leading to improved reliability. IDA developed a survey and distributed it to research staff members who are subject matter experts on the programs of interest. Survey topics included questions on the program's reliability growth plan, plans for tracking reliability during development, whether the program has a process for calculating the reliability growth potential, and questions on reliability performance in OT. Select survey questions are listed in Table 1. For most questions, respondents were required to answer "yes," "no," or "unknown."
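The reliability growth plans and growth-potential calculations the survey asks about are commonly built on parametric growth models. As a minimal sketch (the model choice and every parameter below are assumptions for illustration, not values from any surveyed program), the Crow-AMSAA model projects instantaneous MTBF along a growth curve, which is one way intermediate goals at successive test events can be set:

```python
# Sketch of a reliability growth curve under the Crow-AMSAA (NHPP) model.
# All parameter values here are illustrative assumptions, not survey data.

def crow_amsaa_mtbf(t, lam, beta):
    """Instantaneous MTBF at cumulative test time t.

    Crow-AMSAA failure intensity: rho(t) = lam * beta * t**(beta - 1);
    reliability growth corresponds to beta < 1.
    """
    return 1.0 / (lam * beta * t ** (beta - 1))

lam, beta = 0.5, 0.6                 # assumed scale and growth parameters
milestones = [100, 500, 1000, 2000]  # assumed cumulative test hours at DT events

# Intermediate MTBF goals a program might track against its growth curve,
# e.g. as RAM entrance criteria for each successive test phase.
for t in milestones:
    print(f"{t:5d} h: projected MTBF = {crow_amsaa_mtbf(t, lam, beta):.1f} h")
```

With beta < 1 the projected MTBF rises with accumulated test time, so each milestone yields a concrete intermediate goal of the kind the survey questions on growth curves and RAM entrance criteria refer to.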

Respondents were also provided with opportunities to enter comments for each question. The most recent survey was conducted in 2013 and focused on programs that submitted a Test and Evaluation Master Plan (TEMP) to DOT&E or had an OT in FY 2012. The TEMP is the overarching document that describes the program's test plan [3].

1.1 Survey Analysis Approach

Analysis of each survey question considered how the responses varied over time by comparing responses in the most recent survey to the earlier surveys by TEMP date. Duplicate survey entries between surveys were removed. The analysis also considered differences by lead Service, including the Army, Navy, and Air Force (Marine Corps responses were grouped with the Navy), and by acquisition phase. The analysis binned the responses using the following TEMP date categories to maintain consistency with the methodology used in previous survey analyses:

• Dated before July 2008, prior to approval of a key DoD reliability policy (75 responses)

• Dated between June 2008 and October 2010 (81 responses)

• Dated in FY 2011 (57 responses)

• Dated in FY 2012 or FY 2013 (52 responses).

Where appropriate, contingency tables were used to record and analyze the relationship between two or more categorical variables. This allowed the determination of whether the observed results were statistically significant.

1.2 Population of Survey Responses
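A minimal sketch of the contingency-table analysis described above, assuming a Pearson chi-square test of independence on a 2×2 table; the counts are hypothetical placeholders, not survey figures:

```python
# Pearson chi-square test of independence for a contingency table,
# standard library only. The counts below are hypothetical.

def pearson_chi_square(table):
    """Chi-square statistic for a contingency table given as nested lists."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical responses: rows = policy implemented (yes/no),
# columns = reliability threshold met in OT (yes/no).
table = [[30, 10],
         [12, 18]]
chi2 = pearson_chi_square(table)
# For a 2x2 table (1 degree of freedom), chi2 > 3.84 rejects independence
# at the 95 percent confidence level.
print(f"chi-square = {chi2:.2f}")
```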

IDA analysts completed 97 responses in the most recent reliability survey, conducted in 2013. Of the 97 responses, 52 were for programs that had an FY 2012 or 2013 TEMP, 66 were for programs that had an FY 2012 OT, and 7 were for programs that did not have an FY 2012 or 2013 TEMP or OT. Of the 66 programs with an FY 2012 OT, 28 also had an FY 2012 or 2013 TEMP. Table 2 shows the breakdown of responses by acquisition phase, lead Service, and test type. Approximately 63 percent of systems represented by survey responses were past their Initial Operational Test (IOT).

2 SURVEY RESULTS

Overall results, based on analysis of survey responses and user comments, reinforce the understanding that systems with a robust reliability growth program are more likely to reach reliability goals. In particular, analysis results revealed the importance of establishing RAM entrance criteria and intermediate goals that are linked to the reliability growth curve. As shown in Table 3, programs that establish and meet their RAM entrance criteria are more likely to demonstrate reliability at or above the required value during OT. Examples of effective RAM entrance criteria include (1) demonstrating, in the last DT event before the IOT&E, a reliability point estimate that is consistent with the reliability growth curve, and (2) for automated information systems, ensuring that there are no open Category 1 or 2 deficiency reports prior to OT [4].
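The Table 3 association between meeting RAM entrance criteria and meeting OT reliability goals can be checked with Fisher's exact test, a standard small-sample alternative to the Pearson chi-square the paper reports. The 2×2 counts are taken from the text (13 of 15 programs that met their entrance criteria met their OT goals; 0 of 7 that failed them did); the test itself is an illustration, not the paper's computation:

```python
# Two-sided Fisher exact test for a 2x2 table, standard library only.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """P-value for the table [[a, b], [c, d]] under the hypergeometric null."""
    row1, row2 = a + b, c + d
    col1 = a + c
    denom = comb(row1 + row2, col1)

    def prob(x):
        # Probability of x counts in the top-left cell with all margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum over all tables at least as extreme as the one observed.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs)

# Counts reported with Table 3: met criteria (13 met OT goal, 2 did not);
# failed criteria (0 met, 7 did not).
p = fisher_exact_two_sided(13, 2, 0, 7)
print(f"Fisher exact p = {p:.6f}")
```

The resulting p-value is well below 0.05, consistent with the statistically significant Pearson result the paper reports for Table 3.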

Of the 15 programs in Table 3 that established and met their RAM entrance criteria in DT, 13 met their reliability goals in OT. None of the seven programs that failed to meet their entrance criteria in DT went on to meet their reliability thresholds in OT. The Pearson p-value shown in Table 3 indicates that this result is statistically significant. This result suggests that programs that do well in DT are more likely to do well in later OT. However, despite this obvious result, many programs do not establish RAM entrance criteria, and programs that fail to meet entrance criteria in DT are still permitted to move forward and participate in OT. This result confirms that moving forward programs that perform poorly in DT increases the risk they will fail to reach reliability thresholds in OT. There is also evidence that programs that have intermediate goals linked to the reliability growth curve are more likely to meet their RAM entrance criteria, as shown in Table 4.

Overall results also suggest that implementing RAM policies alone, without the support of a robust reliability growth program, is insufficient to improve the chance of success in OT. Analysis of responses collected in 2013 for programs that had an IOT&E or FOT&E provides no significant evidence that implementation of RAM policies alone improves the chance of demonstrating RAM thresholds during OT. As shown in Table 5, there was no single policy area that could be correlated with success in OT. In fact, a smaller fraction of programs with growth curves met their RAM entrance and exit criteria compared to programs that do not have reliability growth curves. User comments report a variety of reliability growth plan inadequacies such as requirement deficiencies, policy implementation concerns, and testing limitations. For example, some respondents commented that reliability growth curves were constructed as an afterthought, retrofitted into the TEMP only after DOT&E requested information on them. In these instances, the construction of the reliability growth curve was to comply with a paper policy rather than to reflect systems engineering activities. Other respondents indicated that the reliability requirements were not achievable because they were based on faulty modeling assumptions or were unrealistically high compared to similar systems. Finally, some respondents commented that there was insufficient testing in OT to evaluate the reliability requirement, or that the reliability growth model inputs were not based on realistic assumptions.

Consistent with the results of previous surveys, survey responses collected in 2013 provide no evidence of improvement in the percentage of programs that met their RAM entrance or exit criteria. Compared to other types of OT, FOT&Es had the highest fraction of programs that met their exit criteria or demonstrated reliability above the requirement (Figure 1). This suggests that many programs do not reach their reliability goals until after fielding.

2.1 Comparison of Responses by TEMP Date

Analysis of responses shows that the fraction of programs that implement reliability-focused policy guidance continues to improve. Areas of continuous policy implementation improvement over time included the following:

• Having a reliability growth (RG) strategy

• Documenting the RG strategy in the TEMP

• Incorporating RG curves into the TEMP

• Having a process for calculating RG potential.

The results for these questions are listed in Table 6 for known "Yes" or "No" responses. Analysis results suggest that the improvement over time is statistically significant at the 90 percent confidence level. As shown in Table 7, the fraction of FY 2012 or 2013 TEMP programs that use the reliability growth curve to develop intermediate goals improved (59 percent) compared to FY 2011 TEMP programs (48 percent), but remained below the fraction observed for programs with TEMPs approved between June 2008 and October 2010 (73 percent). The fraction of FY 2012 or 2013 TEMP programs that use reliability metrics to ensure growth is on track to achieve requirements also increased, reaching a higher percentage than that observed for older TEMP date categories. The fraction of programs that have reliability growth curves has remained relatively constant over time. Approximately 60 percent of programs with FY 2012 or 2013 approved TEMPs link their reliability growth goal to an OT event.

2.2 Differences Across Lead Services

Among programs with FY 2012 or 2013 TEMP approvals, all Services are generally following guidance to:

• Establish a reliability growth or improvement strategy and describe it in the TEMP

• Incorporate reliability growth curves into the TEMP

• Use reliability metrics to ensure growth is on track to achieve requirements.

Army and Navy programs show improvement in implementing the following RAM policies:

• Establishing a reliability growth or improvement strategy (since July 2008, more than 80 percent of Air Force programs have had a reliability growth or improvement strategy)

• Having reliability growth curves and documenting them in the TEMP

• Calculating reliability growth potential.

A larger fraction of Army and Navy programs with FY 2012 or 2013 TEMPs establish and link intermediate goals to the reliability growth curve compared to the Air Force. As shown in Figure 2, Army programs were more likely to link reliability growth goals to OTs compared to the other Services.

3 RECOMMENDATIONS

Survey results suggest that military programs should carry out the following activities to improve their chance of meeting reliability requirements in OT:

• Establish OT entrance criteria and ensure these criteria are met prior to proceeding to the next test phase.

• In accordance with existing USD(AT&L) policy, ensure that reliability growth curves are stated in a series of intermediate goals and tracked through fully integrated, system-level test and evaluation events until the reliability threshold is achieved.

• Ensure that reliability growth curve assumptions are based on realistic inputs from systems engineering.

• Review the adequacy of requirements to ensure they are achievable.

• Update reliability growth curves as needed.

• Ensure that enough test time is resourced to support an evaluation of the reliability requirement(s).

REFERENCES

1. Title 10, United States Code, Section 139, "Director, Operational Test and Evaluation."

2. "Director, Operational Test and Evaluation FY 2013 Annual Report," January 2014.

3. "Defense Acquisition Guidebook," Section 9.5.5, October 2012.

4. DOT&E Memo to the USD(AT&L), "Reliability Survey of Select Acquisition Programs on DOT&E Oversight," October 30, 2013.

BIOGRAPHIES

Jonathan L. Bell, PhD
Institute for Defense Analyses, Operational Evaluation Division
4850 Mark Center Drive, Alexandria, VA 22310 USA
e-mail: jlbell@ida.org

Jonathan L. Bell is a Research Staff Member at the Institute for Defense Analyses (IDA). He earned his doctoral degree in Materials Science and Engineering at the University of Illinois at Urbana-Champaign in 2008 and his bachelor's degree in Materials Science at Carnegie Mellon University. Dr. Bell's work at IDA is focused on operational testing of ground vehicle systems with a specific emphasis on reliability.

Matthew R. Avery, PhD
Institute for Defense Analyses, Operational Evaluation Division
4850 Mark Center Drive, Alexandria, VA 22310 USA
e-mail: mavery@ida.org

Matthew Avery, a Research Staff Member at the Institute for Defense Analyses (IDA), earned a Master of Science and a doctoral degree in Statistics from North Carolina State University. Dr. Avery's work at IDA focuses on statistical aspects of operational test and evaluation.

Michael C. Wells
Institute for Defense Analyses, Operational Evaluation Division
4850 Mark Center Drive, Alexandria, VA 22310 USA
e-mail: mwells@ida.org

Michael C. Wells, a Research Staff Member at the Institute for Defense Analyses (IDA), earned a Master of Science degree in Operations Research from the University of California, Berkeley, a Master of Science in Information Management from Marymount University, and a bachelor's degree from the United States Military Academy. Mr. Wells joined IDA following 24 years of service with the US Army. Mr. Wells' work at IDA has focused on operational testing of Army fires.

