They tend to be resource-intensive, long-running, repetitive processes. The process usually involves reading large amounts of data from a database, processing the data, and returning the results to a database. This process is accomplished through the execution of scripts. Types of batch jobs that organizations execute include:

• Financial management reports.
• Marketing reports.
• Supply chain management reports.
• Inventory reports.
• Invoice reports.
• Customer account processing (monthly account billing, and so on).
• Automated backups of system and application data.
• System processing summaries and capacity planning reports.

Introduction

This guide provides detailed information about the Job Scheduling SMF for organizations that have deployed, or are considering deploying, Microsoft technologies in a data center or other type of enterprise computing environment. This is one of the more than 20 SMFs defined and described in Microsoft® Operations Framework (MOF).

Recording job results locally on each application server also facilitates error analysis at the local level.

Figure 2. Simplified view of a batch architecture

Management Server

The heart of the batch architecture is the management server, on which the batch scheduling tool resides. This tool permits the automatic execution of predetermined, scheduled batch runs. The scheduling tool can typically perform the following functions automatically (a minimal sketch of such a scheduling loop appears after the Capacity Database overview below):

• Start and stop jobs based on date, time, day of the week, frequency, and so on.
• Define, maintain, and manage job queues.
• Prioritize jobs in the queue.
• Assign jobs to specific servers based on availability.
• Track the status of jobs and allow real-time monitoring.
• Perform error recovery during batch runs.
• Report and log errors.
• Generate reports.
• Archive reports and purge out-of-date reports and log files.
• Graphically display all information.
• Display job history.

Note that the functionality of scheduling tools varies greatly. While some tools may be able to execute all of the tasks listed above, other tools may only be able to start a batch run at a particular time. Sophisticated scheduling tools should give the capacity manager the ability to:

• Change schedules.
• Change jobs.
• Change job priorities.
• Start and stop jobs.
• Recover or restart failed jobs.
• Initiate as-needed jobs.
• Generate reports.

The user interface of the scheduling tool should be easy for the capacity manager to interpret and use. The capacity manager should be able to exercise the capabilities described above from a centralized location. From this location, the capacity manager should be able to access any information required to control batch processing and troubleshoot errors.

Capacity Database

General information about each batch job, together with the metrics collected by the scheduling tool, is typically stored in a batch (or application) log. Scripts should contain a job step (a portion of the script code) that records job execution information in three different log files: the batch log file, the error log file, and the system event log file, which stores significant system events, including batch-processing errors and the successful or unsuccessful execution of batch jobs (a sketch of such a job step appears below). The capacity database (CDB) is the central repository for all capacity- and performance-related information. Ideally, the batch and error logs should be part of, or integrated with, the CDB.
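The core of the management server behavior described above is a loop that starts jobs by date, time, and priority. The following is a minimal sketch of such a loop in Python; the job names, scripts, and intervals are illustrative assumptions, and a real scheduling tool would add calendars, dependency handling, server assignment, error recovery, and reporting.

    import heapq
    import subprocess
    import time
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass(order=True)
    class Job:
        next_run: datetime                          # when the job next becomes due
        priority: int                               # lower value = higher priority
        name: str = field(compare=False)
        command: list = field(compare=False)        # script invocation to execute
        interval: timedelta = field(compare=False)  # recurrence frequency

    # The job queue is ordered by (next_run, priority).
    queue = []
    heapq.heappush(queue, Job(datetime.now(), 1, "billing",
                              ["python", "billing.py"], timedelta(days=1)))
    heapq.heappush(queue, Job(datetime.now(), 2, "backup",
                              ["python", "backup.py"], timedelta(hours=6)))

    while queue:  # runs indefinitely, like a scheduling daemon
        job = heapq.heappop(queue)
        wait = (job.next_run - datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)                        # idle until the job is due
        result = subprocess.run(job.command)        # start the batch script
        status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
        print(f"{datetime.now():%Y-%m-%d %H:%M:%S} {job.name}: {status}")  # track status
        job.next_run += job.interval                # reschedule the next occurrence
        heapq.heappush(queue, job)

Ordering the queue by due time first and priority second mirrors the queue prioritization listed above: when several jobs are due at once, the higher-priority job starts first.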
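The three-log job step described under Capacity Database can be sketched with Python's standard logging module. The file names batch.log and error.log and the job name monthly_billing are assumptions for illustration; the system event log handler applies only on Windows and requires the pywin32 package.

    import logging
    import logging.handlers
    import sys

    def configure_job_logging(job_name):
        """Attach the three log destinations described above to one logger."""
        fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")

        batch_log = logging.FileHandler("batch.log")   # every execution record
        batch_log.setFormatter(fmt)

        error_log = logging.FileHandler("error.log")   # batch-processing errors only
        error_log.setLevel(logging.ERROR)
        error_log.setFormatter(fmt)

        logger = logging.getLogger(job_name)
        logger.setLevel(logging.INFO)
        logger.addHandler(batch_log)
        logger.addHandler(error_log)
        if sys.platform == "win32":
            # Significant events also go to the Windows system event log
            # (requires the pywin32 package).
            logger.addHandler(logging.handlers.NTEventLogHandler(job_name))
        return logger

    logger = configure_job_logging("monthly_billing")
    logger.info("job started")
    try:
        # ... job steps: read data, process it, write results back ...
        logger.info("job completed successfully")
    except Exception:
        logger.exception("job failed")  # recorded in both the batch and error logs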
The batch log contains general information about each batch job and information about system performance during job execution. The error log records batch-processing errors and system component warnings that occur during batch processing. Storing log information centrally facilitates retrieval and management of the information. Keep in mind that the CDB is not necessarily a single repository; it may be a group of repositories that together contain all of the capacity and performance information collected for the IT environment.

General information that should be collected about each job before the job is placed into production includes (a sketch of how such a record might be structured appears at the end of this section):

• Job name
• Description of the job
• Identification number
• Owner and contact information
• Batch run affiliation
• Batch job steps
• Time and frequency of execution
• Execution window
• Start and end conditions
• Steps where recovery can occur (which job step)
• Batch job duration
• Special conditions when the job should not run
• Relationships to and dependencies on other jobs
• Job priority
• Expected results
• Problem resolution procedures
• The application servers that are utilized
• The databases that will be accessed
• Reporting requirements

Performance and error-related information that should be collected includes:

• The impact of the job on the batch architecture:
  • Memory utilization
  • CPU utilization
  • Network utilization
  • Disk utilization
• The job start and stop times
• The job duration
• The system components that were actually used to process the job
• Processing errors
• System warnings when thresholds are exceeded

Each application server that is involved in the batch run should record job processing results and performance metrics in a local log. It should then transfer the information to a central platform established to consolidate all logged information in a single database (the batch log or CDB). Centrally storing batch/error log data and descriptive job information gives the capacity manager easy access to the information used to optimize system performance and to analyze and correct errors. It also makes it easy to back up important information. Operational reports are typically developed from information stored in the CDB, as the consolidation sketch below illustrates.
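To make the consolidation step concrete, the following sketch gathers per-server local logs into a central store, with a SQLite database standing in for the CDB. The server names, CSV log layout, and table schema are all assumptions for illustration.

    import csv
    import sqlite3

    central = sqlite3.connect("cdb.sqlite")  # stands in for the CDB
    central.execute("""
        CREATE TABLE IF NOT EXISTS batch_log (
            server TEXT, job TEXT, start TEXT, stop TEXT,
            status TEXT, cpu_pct REAL, mem_mb REAL
        )""")

    # Each application server ships a local log (here a CSV file named
    # <server>_batch.csv); consolidate them into the central repository.
    for server in ("appsrv01", "appsrv02", "appsrv03"):
        with open(f"{server}_batch.csv", newline="") as f:
            for row in csv.DictReader(f):
                central.execute(
                    "INSERT INTO batch_log VALUES (?, ?, ?, ?, ?, ?, ?)",
                    (server, row["job"], row["start"], row["stop"],
                     row["status"], row["cpu_pct"], row["mem_mb"]))
    central.commit()

    # Operational reports then run against the consolidated data, for
    # example the five most recently completed jobs:
    for job, start, stop in central.execute(
            "SELECT job, start, stop FROM batch_log ORDER BY stop DESC LIMIT 5"):
        print(job, start, stop)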
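Finally, the descriptive job information listed earlier in this section might be captured as a single record before a job enters production. The following dataclass covers a representative subset of those fields; the field names and sample values are illustrative assumptions, not a prescribed CDB schema.

    from dataclasses import dataclass, field
    from datetime import time, timedelta

    @dataclass
    class BatchJobRecord:
        job_id: str                      # identification number
        name: str                        # job name
        description: str
        owner_contact: str               # owner and contact information
        batch_run: str                   # batch run affiliation
        schedule: str                    # time and frequency of execution
        execution_window: tuple          # earliest start, latest end
        priority: int                    # job priority
        expected_duration: timedelta     # batch job duration
        depends_on: list = field(default_factory=list)      # dependencies on other jobs
        recovery_steps: list = field(default_factory=list)  # steps where recovery can occur
        app_servers: list = field(default_factory=list)     # application servers utilized
        databases: list = field(default_factory=list)       # databases accessed

    record = BatchJobRecord(
        job_id="FIN-042",
        name="monthly_billing",
        description="Customer account processing: monthly account billing",
        owner_contact="finance-ops@example.com",
        batch_run="month-end",
        schedule="monthly, first Saturday, 01:00",
        execution_window=(time(1, 0), time(5, 0)),
        priority=1,
        expected_duration=timedelta(hours=2),
        depends_on=["FIN-041"],
    )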