frodo_for_trandumper(R.R)

       
      name = frodo for trandumper(R.R)
      exam = 70-029



      Questions: 

      1. You are the database developer for a leasing company. Your leasing 
      database includes a Lessee table that is defined as follows: 
      CREATE TABLE Lessee ( Id Int IDENTITY NOT NULL CONSTRAINT pk_lesse_id 
      PRIMARY KEY NONCLUSTERED, Surname varchar(50) NOT NULL, FirstName 
      varchar(50) NOT NULL, SocialSecurityNo char(9) NOT NULL, CreditRating 
      char(10) NULL, Creditlimit money NULL) 
      Each SocialSecurityNo must be unique. You want the data to be physically 
      stored in order by SocialSecurityNo. Which constraint should you add to 
      the SocialSecurityNo column on the Lessee table? 
      A. a UNIQUE CLUSTERED constraint 
      B. a UNIQUE UNCLUSTERED constraint 
      C. a PRIMARY KEY CLUSTERED constraint 
      D. a PRIMARY KEY UNCLUSTERED constraint 



      Answer: A 

      Gabio_final:
      PRIMARY KEY constraints create clustered indexes automatically if no 
      clustered index already exists on the table and a nonclustered index is 
      not specified when you create the PRIMARY KEY constraint. 
      In the table definition you can see PRIMARY KEY NONCLUSTERED, so a 
      clustered index can still be built on SocialSecurityNo with no problem.
      UNIQUE = each SocialSecurityNo is unique.
      CLUSTERED = the data is physically stored in that order. 
      B: the data would not be physically stored in order.
      C, D: a table can have no more than one PRIMARY KEY. 
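As a sketch, the constraint from answer A could be added like this (the constraint name uq_lessee_ssn is illustrative, not from the question):

```sql
-- Add a UNIQUE constraint backed by a clustered index: duplicates are
-- rejected and the rows are physically ordered by SocialSecurityNo.
ALTER TABLE Lessee
ADD CONSTRAINT uq_lessee_ssn UNIQUE CLUSTERED (SocialSecurityNo)
```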



      2. Your database includes a table that is defined as follows: 
      CREATE TABLE Orders ( OrderID Int IDENTITY(1,1) NOT NULL, RegionID Int NOT 
      NULL, SalesPersonID Int NOT NULL, OrderDate Datetime NOT NULL, OrderAmount 
      Int NOT NULL) 
      The sales manager wants to see a report that shows total sales by region 
      as well as a grand total of sales. Which query can you use to create the 
      report? 
      A. SELECT salespersonID, regionID, SUM(orderamount) FROM orders GROUP BY 
      salespersonID,RegionID 
      B. SELECT salespersonID, regionID, SUM(orderamount) FROM orders ORDER BY 
      regionID 
      COMPUTE SUM(orderamount) 
      C. SELECT salespersonID, regionID, orderamount FROM orders ORDER BY 
      RegionId 
      COMPUTE SUM(OrderAmount) BY regionID 
      COMPUTE SUM(OrderAmount) 



      Answer: C 
      Gabio_final:
      Two COMPUTE clauses are needed: one for total sales by region and one 
      for the grand total of sales. COMPUTE ... BY requires a matching ORDER BY. 
      The COMPUTE and COMPUTE BY clauses are provided for backward 
      compatibility. Instead, use these components: 
      - Microsoft OLAP Services in conjunction with OLE DB for OLAP Services 
      or ActiveX Data Objects Multidimensional (ADO MD). For more information, 
      see Microsoft SQL Server OLAP Services. 
      - The ROLLUP operator.
      A COMPUTE BY clause allows you to see both detail and summary rows with 
      one SELECT statement. You can calculate summary values for subgroups, or a 
      summary value for the entire result set.
      A: does not contain a grand total.
      B: does not contain subtotals by region.
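The ROLLUP form that BOL recommends over COMPUTE can produce the same per-region subtotals and grand total in one result set; a sketch against the Orders table above:

```sql
-- Per-region totals plus a grand total; the extra row with a NULL
-- RegionID is the grand total produced by ROLLUP.
SELECT RegionID, SUM(OrderAmount) AS TotalSales
FROM Orders
GROUP BY RegionID WITH ROLLUP
```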


      3. You are building a new database for the Human Resources Department of 
      a company. There are 10 departments within the company and each 
      department contains multiple employees. In addition, each employee might 
      work for several departments. How should you logically model the 
      relationship between the department entity and the employee entity? 
      A. A mandatory one-to-many relationship between department and employee 
      B. An optional one-to-many relationship between department and employee 
      C. Create a new entity, create a one-to-many relation from the employee 
      to the new entity, and create a one-to-many relation from the department 
      entity to the new entity.
      D. Create a new entity, create a one-to-many relation from the new 
      entity to the employee entity, and then create a one-to-many relationship 
      from the new entity to the department entity



      Answer: C
      Frodo: Made the C option more precise. Old C (which was actually D) was:
      Create a new entity, create a one-to-many relation from the employee to 
      the new entity, and create a many-to-one relationship from the entity to 
      the department entity



      Gabio_final:
      In a many-to-many relationship between two tables, one record in either 
      table can relate to many records in the other table. To establish a 
      many-to-many relationship, you need to create a third (junction) table and 
      add the primary key fields from each of the other two tables to this 
      table. A junction table contains the primary key columns of the two tables 
      you want to relate. You then create a relationship from the primary key 
      columns of each of those two tables to the matching foreign key columns in 
      the junction table.
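A minimal sketch of that junction-table design for answer C (all table and column names here are illustrative, not from the exam):

```sql
CREATE TABLE Department (
    DepartmentID Int NOT NULL PRIMARY KEY,
    Name Varchar(50) NOT NULL)

CREATE TABLE Employee (
    EmployeeID Int NOT NULL PRIMARY KEY,
    Name Varchar(50) NOT NULL)

-- Junction table: one row per (department, employee) assignment.
CREATE TABLE DepartmentEmployee (
    DepartmentID Int NOT NULL REFERENCES Department (DepartmentID),
    EmployeeID Int NOT NULL REFERENCES Employee (EmployeeID),
    PRIMARY KEY (DepartmentID, EmployeeID))
```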

      4. You have a database that includes a table named Sales. You monitor the 
      disk I/O on your Sales table, and you suspect that the table indexes are 
      fragmented. The Sales table has a clustered index named C_Sales on the 
      primary key and two non-clustered indexes named nc_sales1 and nc_sales2. 
      You want to rebuild the indexes on the Sales table by using a method that 
      consumes the fewest resources. How should you rebuild the indexes? 
      A. DBCC DBREINDEX (sales) 
      B. Create clustered index with drop-existing, create non-clustered index 
      with drop-existing 
      C. ALTER the clustered index, alter the non-clustered index with drop 
      existing 
      D. Three DROP INDEX statements, then three CREATE INDEX statements 




      Answer: A 
      Gabio_final:
      We only need to rebuild the indexes, using the fewest resources, so use 
      DBCC DBREINDEX. It is atomic and allows more optimizations. If the index 
      name is not specified, all indexes for the table are rebuilt. DBCC 
      DBREINDEX can take advantage of more optimizations than it can with 
      individual DROP INDEX and CREATE INDEX statements.
      B: The DROP_EXISTING clause enhances performance when recreating a 
      clustered index (with either the same or a different set of keys) on a 
      table that also has nonclustered indexes. The DROP_EXISTING clause 
      replaces the execution of a DROP INDEX statement on the old clustered 
      index followed by the execution of a CREATE INDEX statement for the new 
      clustered index. The nonclustered indexes are rebuilt once, and only if 
      the keys are different. 
      If the keys do not change (the same index name and columns as the 
      original index are provided), the DROP_EXISTING clause does not sort the 
      data again. This can be useful if the index must be compacted. But 
      because recreating the clustered index also recreates the nonclustered 
      indexes, they are rebuilt twice, so B is not the desirable solution.
      C: not a valid command.
      D: The DROP INDEX statement does not apply to indexes created by defining 
      PRIMARY KEY or UNIQUE constraints (created by using the PRIMARY KEY or 
      UNIQUE options of either the CREATE TABLE or ALTER TABLE statements, 
      respectively); it also consumes more resources (nonclustered indexes are 
      built twice).
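For comparison, the DROP_EXISTING form discussed under option B looks roughly like this (a sketch: it assumes the clustered index was created with CREATE INDEX rather than as a constraint, and the key column SalesID is an assumption):

```sql
-- Recreate the clustered index in place, replacing a separate
-- DROP INDEX + CREATE INDEX; with identical keys the data is not
-- sorted again.
CREATE CLUSTERED INDEX c_sales
ON Sales (SalesID)
WITH DROP_EXISTING
```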


      5. You have a table Sales with one CLUSTERED INDEX on sales_id and 3 
      NONCLUSTERED INDEXES on Customer_Id, Date, and Amount. Users are 
      complaining because it takes too much time to insert or update data. You 
      want to rebuild the indexes. 
      A. DBCC DBREINDEX sales.dbo.sales_id DBCC DBREINDEX 
      sales.dbo.customer_id DBCC DBREINDEX sales.dbo.date DBCC DBREINDEX 
      sales.dbo.amount 
      B. DBCC DBREINDEX sales.dbo.sales_id DBCC DBREINDEX 
      sales.dbo.customer_id, sales.dbo.amount, sales.dbo.date 
      C. DROP INDEX on sales_id CREATE INDEX on sales_id
      D. DBCC DBREINDEX sales.dbo.sales_id 



      Answer: D 
      Gabio_final: 
      When a CLUSTERED INDEX is rebuilt, all NONCLUSTERED INDEXES will be 
      rebuilt automatically. 
      DBCC DBREINDEX ( 'database.owner.table_name' [, index_name 
      [, fillfactor] ] ) [WITH NO_INFOMSGS]
      So DBCC DBREINDEX sales.dbo.sales_id would be incorrect syntax: the 
      table name must be quoted, and the separator before the index name must 
      be a comma. If this is the real text from the exam, then answer D is not 
      good and C is the right one.
      A: consumes more resources.
      B: not a valid command.
      C: Normally, when a clustered index is dropped, every nonclustered index 
      has to be rebuilt to change its bookmarks to RIDs instead of the 
      clustering keys. Then, if a clustered index is built (or rebuilt), all 
      the nonclustered indexes must be rebuilt again to update the bookmarks.
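As a sketch, a syntactically valid form of the DBCC call (the index name c_sales_id is an assumption; the question only says the clustered index is on sales_id):

```sql
-- Rebuilding the clustered index rebuilds all nonclustered indexes
-- too. Note the quoted table name and the comma before the index name.
DBCC DBREINDEX ('sales.dbo.sales', 'c_sales_id')

-- Omitting the index name rebuilds every index on the table:
DBCC DBREINDEX ('sales.dbo.sales')
```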


      6. You are building a database for the human resources department of 
      your company. You want to eliminate duplicate entry and minimize data 
      storage wherever possible. You want to track the following information 
      for each employee: first name, middle name, last name, employee 
      identification number, address, date of hire, department, salary, and 
      name of manager. Which table design should you use? 
      A. First table: employeeID, ManagerID, firstname, middlename, lastname, 
      address, dateofhire, department, salary Second table: ManagerID, 
      firstname, middlename, lastname 
      B. First table: employeeID, firstname, middlename, lastname, address, 
      dateofhire, department, salary Second table: ManagerID, firstname, 
      middlename, lastname Third table: EmployeeID, ManagerID 
      C. Only one table: EmployeeID, ManagerID, firstname, middlename, 
      lastname, address, dateofhire, department, salary 
      D. Employee Table / Manager Table / ManagerEmployee Table 


      Answer: C 
      Gabio_final:
      One table with a SELF JOIN. ManagerID accepts NULL, so the highest level 
      can enter NULL. 
      A: duplicate data, more data storage
      B: see A
      D: ???
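The self-join from answer C might look like this (the table name Employees and the column list are illustrative):

```sql
-- Each row's ManagerID refers back to another EmployeeID in the same
-- table; the top of the hierarchy has ManagerID = NULL.
SELECT e.FirstName + ' ' + e.LastName AS Employee,
       m.FirstName + ' ' + m.LastName AS Manager
FROM Employees e
LEFT JOIN Employees m ON e.ManagerID = m.EmployeeID
```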


      7. You need to create two new tables for your Purchasing database. The 
      new tables will be named PurchaseOrderHeader and PurchaseOrderLine. The 
      PurchaseOrderHeader table will have the PurchaseOrderHeaderID column as 
      the primary key. A PurchaseOrderLine row must not exist without a 
      corresponding PurchaseOrderHeader row. How can you create the tables? 


      A. Something with CHECK Constraint
      B. Some other thing with CHECK Constraint
      C. Create the PurchaseOrderHeader table, and then create the 
      PurchaseOrderLine table that has a FOREIGN KEY constraint referencing the 
      PurchaseOrderHeader table.
      D. Create both tables, then create a FOREIGN KEY constraint from 
      PurchaseOrderHeader referencing the PurchaseOrderLine table.


      Answer: C 
      A 1-M relationship. Use a PRIMARY KEY and a FOREIGN KEY to create the 
      constraint. A foreign key can reference a PRIMARY KEY or a UNIQUE 
      constraint. If the foreign key value is NULL, the constraint check is 
      skipped; otherwise, the value must exist in the referenced table. 
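A sketch of answer C (the column types beyond the keys are assumptions):

```sql
CREATE TABLE PurchaseOrderHeader (
    PurchaseOrderHeaderID Int NOT NULL PRIMARY KEY,
    OrderDate Datetime NOT NULL)

-- The FOREIGN KEY rejects any PurchaseOrderLine row whose
-- PurchaseOrderHeaderID has no matching PurchaseOrderHeader row.
CREATE TABLE PurchaseOrderLine (
    PurchaseOrderLineID Int NOT NULL PRIMARY KEY,
    PurchaseOrderHeaderID Int NOT NULL
        REFERENCES PurchaseOrderHeader (PurchaseOrderHeaderID),
    Quantity Int NOT NULL)
```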






      8. You need to produce a sales report listing all salesperson 
      identification numbers, sales amounts, and order dates. You want the 
      report sorted from most recent sales to oldest sales for each day. You 
      want the sales amounts sorted from highest to lowest. You will be 
      selecting this information from a table that is defined as follows: 
      CREATE TABLE SalesInformation ( SalesInformationId Int IDENTITY (1,1) NOT 
      NULL PRIMARY KEY NONCLUSTERED, SalesPersonID Int NOT NULL, RegionID Int 
      NOT NULL, ReceiptID Int NOT NULL, SalesAmount Money NOT NULL, OrderDate 
      Datetime NOT NULL) 
      Which query will accurately produce the report? 
      A. SELECT salepersonID, saleamount, orderdate FROM salesinformation ORDER 
      BY orderdate DESC, saleamount DESC 
      B. SELECT salepersonID, saleamount, orderdate FROM salesinformation ORDER 
      BY orderdate, saleamount DESC 
      C. Others with various combinations of Max(salesamount), with DESC etc 
      D. Others with various combinations of Max(salesamount), without DESC etc 



      Answer: A 
      Gabio_final:
      There is no SUM involved. The list needs two DESCs. 
      B: only the amount is descending; the date needs to be descending as well.
      DESC: Specifies that the values in the specified column should be sorted 
      in descending order, from highest value to lowest value. 


      9. USE Sales 
      DELETE FROM backorder FROM backorder bk INNER JOIN orders od ON 
      bk.order_id = od.order_id WHERE CONVERT(CHAR(10), ship_date) = 
      CONVERT(CHAR(10), GETDATE()) 
      What will this statement delete? 
      A. All records from BackOrders table entered today 
      B. Records from Orders table for orders that are backordered 
      C. Records from BackOrder table for orders that are shipped today 
      D. None, because the statement will cause a syntax error 



      Answer: C 

      Gabio_final:
      Records are deleted from the BackOrder table. The WHERE clause matches 
      today's shipping date, so it deletes the back orders that are shipped 
      today. To delete rows from a table based on rows in another table, 
      specify an additional FROM clause or specify conditions in the WHERE 
      clause. 
      It is not the best answer, because SELECT CONVERT(CHAR(10), GETDATE()) 
      returns, for example, 'Sep 8 200', so the date is truncated. But it is 
      the only possible answer.
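The truncation comes from the default CONVERT style; adding a style argument would avoid it (a sketch, not one of the exam options):

```sql
-- Style 101 gives mm/dd/yyyy, which fits CHAR(10) exactly, so no
-- truncation occurs when comparing the two dates.
DELETE FROM backorder
FROM backorder bk INNER JOIN orders od ON bk.order_id = od.order_id
WHERE CONVERT(CHAR(10), ship_date, 101) = CONVERT(CHAR(10), GETDATE(), 101)
```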

      Disagreement on the following question!!! 

      10. You work for a telemarketing company. The company's telemarketing 
      application is a purchased application that allows minimal enhancements. 
      The application dials a telephone number and displays the last name of 
      the customer from the Customers table. The telemarketer wants to see the 
      first name of the customer as quickly as possible. You write the 
      following stored procedure: 
      CREATE PROCEDURE GetCustomerFirstName @Lastname Varchar (50) 
      AS 
      SELECT LastName, FirstName FROM Customers 
      WHERE LastName = @LastName 
      You create a NONCLUSTERED index on the LastName column. You set the 
      SHOWPLAN_TEXT option to ON and execute the stored procedure. The output 
      is as follows: 
      |--Bookmark Lookup(BOOKMARK:([Bmk1000]), 
      OBJECT:([Bank].[dbo].[Customer])) 
         |--Index Seek(OBJECT:([Bank].[dbo].[Customer].[LastNameLDX1]), 
      SEEK:([Customer].[LastName]=[@LastName]) ORDERED)
      What can you do to make the query more efficient? 
      A. Make LastName the PRIMARY KEY of the Customer table 
      B. Add FirstName to the NONCLUSTERED index on the Customers table 
      C. SELECT Lastname, Firstname FROM Customer (index=0) WHERE lastname 
      =@lastname 
      D. SELECT firstname, lastname FROM Customer WHERE lastname=@lastname 


      Answer: 
      Gabio_final: A 
      Abdul, Florian, Frodo: B. 

      Florian: B is the best option; many dumpers have written the question 
      incompletely. So go for B only.
      Abdul (reason for not C):
      C: If a clustered index exists, INDEX (0) forces a clustered index scan 
      and INDEX (1) forces a clustered index scan or seek. If no clustered 
      index exists, INDEX (0) forces a table scan and INDEX (1) is interpreted 
      as an error.
      Gabio_final:
      Reason [for A]: The keywords INDEX SEEK reveal that it is trying to 
      perform a NONCLUSTERED INDEX SEEK on the field LASTNAME; therefore, 
      making LASTNAME the primary key would change the plan to a CLUSTERED 
      INDEX SEEK, which improves performance because a CLUSTERED INDEX forces 
      physical data re-arrangement. The issue here is: do you think that 
      LASTNAME will contain unique records? In reality, this is not likely; 
      imagine how many of us share the same last name. But from the exam 
      point of view, I'm assuming that LASTNAME is unique. Option B may be 
      right, but this depends on the order in which the index is created: if 
      you create a non-clustered index with FirstName then LastName, a query 
      which uses the 2nd column, LastName, will not yield any improvement! 
      From the question, the WHERE clause searches only LASTNAME; therefore, 
      even if FIRSTNAME is added to the nonclustered index, making it a 
      composite non-clustered index, how will it improve the performance 
      (based on the query above)? I've also tested Options C and D with Query 
      Analyzer and didn't notice any improvement. Comments? 
      An index can be useful to a query only if the criteria of the query match 
      the columns that are leftmost in the index key. For example, if an index 
      has a composite key of last_name, first_name, that index is useful for a 
      query such as WHERE last_name = 'Smith' or WHERE last_name = 'Smith' AND 
      first_name = 'John'. But it is not useful for a query such as WHERE 
      first_name = 'John'. Think of using the index like a phone book. You use 
      a phone book as an index on last name to find the corresponding phone 
      number. But the standard phone book is useless if you know only a 
      person's first name, because the first name might be located on any page.
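For answer B, the column order matters, as noted above: LastName must stay leftmost so the WHERE clause can still seek, with FirstName added to cover the SELECT list. A sketch (the index name is illustrative):

```sql
-- Composite nonclustered index: the seek still uses LastName, and
-- because FirstName is also in the index, the bookmark lookup
-- disappears (the index alone covers the SELECT list).
CREATE NONCLUSTERED INDEX LastNameIDX1
ON Customers (LastName, FirstName)
```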


      11. You have two tables, TA and TB. You want to create a query joining 
      them, but you want maximum performance. How do you improve the 
      performance? This is an incomplete question. In the real exam it was 
      like this: 
      There is a join between tableA and tableB by a clustered index. Now you 
      are moving the tables to a new database. You want to increase the 
      performance of queries made by the join. 
      It was seen earlier (before moving to the new database) that when both 
      tables were placed on different filegroups, the performance was 
      increased. 
      A. Create two filegroups FGA and FGB on separate disk. Place TA and its 
      indexes on FGA, TB and its indexes on FGB 
      B. Create three filegroups FGA, FGB, FGC on three separate disks. Place 
      TA on FGA, TB on FGB, and the indexes of the two tables on FGC 
      C. Create one RAID 5 disk and place TA, TB, and their index from both 
      tables on the RAID 5 disk 
      D. Something like this: RAID 10 (1 & 0) on 2 controllers, create TA, TB 
      and their index. 



      Answer: B 
      Florian:
      Please check: it may be A also, because a clustered index should be 
      placed on the same filegroup as its table to increase performance.
      Gabio_final:
      C: you cannot create RAID 5 on one disk. You need to spread the 
      filegroups, with logs and indexes on other disks. - Place tables used in 
      a join into different filegroups on different disks. - If a table has a 
      nonclustered index, put the table and the index on different filegroups. 
      - Put the log file on a different disk. 
      A table can be created on a specific filegroup rather than the default 
      filegroup. If the filegroup comprises multiple files spread across 
      various physical disks, each with its own disk controller, then queries 
      for data from the table will be spread across the disks, thereby 
      improving performance. The same effect can be accomplished by creating a 
      single file on a RAID (redundant array of independent disks) level 0, 1, 
      or 5 device. 
      If it is a clustered index, A will be the answer.


      12. You are creating a table named Recruit to track the status of 
      potential new employees. The SocialSecurityNo column must not allow null 
      values. However, a value for a potential employee's Social Security 
      Number is not always known at the time of initial entry. You want the 
      database to populate the SocialSecurityNo column with a value of 
      "UNKNOWN" when a recruiter enters the name of a new potential employee 
      without a Social Security Number. How can you accomplish this task? 
      A. Create a CHECK constraint 
      B. Create a rule and bind it to the column 
      C. Create a DEFAULT definition on the SocialSecurityNo column. 



      Answer: C 
      Gabio_final:
      When you load a row into a table with a DEFAULT definition for a column, 
      you implicitly tell Microsoft SQL Server to load a default value in the 
      column when you do not specify a value for the column.
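A sketch of answer C (columns other than SocialSecurityNo, and the constraint name, are illustrative):

```sql
CREATE TABLE Recruit (
    RecruitID Int IDENTITY (1,1) NOT NULL PRIMARY KEY,
    Name Varchar(100) NOT NULL,
    SocialSecurityNo Char(9) NOT NULL
        CONSTRAINT df_recruit_ssn DEFAULT 'UNKNOWN')

-- Omitting the column makes SQL Server load the default value:
INSERT Recruit (Name) VALUES ('Jane Doe')
```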


      13. You are developing an application for a worldwide furniture 
      wholesaler. You need to create an inventory table on each of the 
      databases located in New York, Chicago, Paris, London, San Francisco, 
      and Tokyo. In order to accommodate a distributed environment, you must 
      ensure that each row entered into the inventory table is unique across 
      all locations. How can you create the inventory table? 
      A. Use the identity function. At the first location use IDENTITY(1,4), 
      at the second location use IDENTITY(2,4), and so on.
      B. Use the identity function. At the first location use IDENTITY(1,1), 
      at the second location use IDENTITY(100000,1), and so on.
      C. CREATE TABLE inventory ( Id Uniqueidentifier NOT NULL DEFAULT NEWID(), 
      ItemName Varchar(100) NOT NULL, ItemDescription Varchar(255) NULL, 
      Quantity Int NOT NULL, EntryDate Datetime NOT NULL) 
      D. Use the TIMESTAMP data type.



      Answer: C

      Florian: NEWID() is the idea: it is never the same, thanks to 
      Microcomputer Software bullsh*t 

      Gabio_final:
      To uniquely identify all the data, you need to use Uniqueidentifier with 
      NEWID (). 
      The uniqueidentifier data type stores 16-byte binary values that operate 
      as globally unique identification numbers (GUIDs). A GUID is a binary 
      number that is guaranteed to be unique; no other computer in the world 
      will generate a duplicate of that GUID value. The main use for a GUID is 
      for assigning an identifier that must be unique in a network that has 
      many computers at many sites.


      14. It is time for the annual sales awards at your company. The awards 
      include certificates awarded for the five sales of the highest dollar 
      amounts. You need to produce a list of the five highest revenue 
      transactions from the Orders table in the Sales database. The Orders 
      table is defined as follows: 


      CREATE TABLE Orders ( OrderID Int IDENTITY (1,1) NOT NULL, SalesPersonID 
      Int NOT NULL, RegionID Int NOT NULL, OrderDate Datetime NOT NULL, 
      OrderAmount Int NOT NULL) 
      Which statement will produce the report? 
      A. SELECT TOP 5 OrderAmount, SalesmanID FROM orders 
      B. SELECT TOP 5 OrderAmount, SalesmanID FROM orders ORDER BY OrderAmount 
      DESC 
      C. SELECT TOP 5 WITH TIES OrderAmount, SalesmanID From Orders ORDER BY 
      SalesmanID DESC 
      D. SELECT TOP 5 WITH TIES OrderAmount, SalesmanID From Orders ORDER BY 
      OrderAmount 



      Answer: B 
      Gabio_final: 
      Highest values: need to sort DESC, and need only 5, so use TOP 5.
      A: only the first 5 records, with no ranking. 
      C: uses the wrong ORDER BY column. 
      D: incorrect order, as there is no DESC. 


      15. Your database includes a Supplier table that contains 20 rows, a 
      Products table that contains 80 rows, an Orders table that contains 
      50,000 rows, and an OrderDetails table that contains 150,000 rows. There 
      are clustered indexes on the primary keys for each table, and the 
      statistics are up to date. You need to analyse the following query for 
      performance optimization: 
      SELECT DISTINCT s.CompanyName FROM Suppliers s JOIN Products p ON 
      s.SupplierID = p.SupplierID JOIN OrderDetails od ON p.ProductID = 
      od.ProductID 
      SET SHOWPLAN ON 
      Which graphical execution plan might be generated by SQL Server Query 
      Analyser? 


      Answer: D

      Abdul:

      I pick D, the one with Product and OrderDetails on the right. My logic: 
      they join first. :>

      Gabio_final:
      I tried it in SQL Query Analyzer:
      |--Hash Match(Aggregate, HASH:([s].[name]), 
      RESIDUAL:([s].[name]=[s].[name]))
         |--Hash Match(Inner Join, HASH:([p].[id])=([od].[id_prod]), 
      RESIDUAL:([p].[id]=[od].[id_prod]))
            |--Nested Loops(Inner Join)
            |  |--Clustered Index Scan(OBJECT:([pubs].[dbo].[product].[IX_product] AS [p]))
            |  |--Clustered Index Seek(OBJECT:([pubs].[dbo].[supplier].[IX_supplier] AS 
      [s]), SEEK:([s].[id]=[p].[id_sup]) ORDERED)
            |--Clustered Index Scan(OBJECT:([pubs].[dbo].[order details].[IX_order 
      details] AS [od]))
      Graphical execution plan (rightmost operators execute first):
      Select <- Hash Match (Aggregate) <- Hash Match (or Merge Join), whose 
      inputs are a Nested Loops join of Product and Supplier, and Order 
      Details.


      16. CREATE TABLE ( Product_ID IDENTITY PRIMARY KEY, Description 
      NONCLUSTERED, product_type, ...) You often query description and 
      product_type. How do you improve performance? 
      A. Enterprise Manager to create NONCLUSTERED INDEX on each referenced 
      column 
      B. Enterprise Manager to generate sp for product table 
      C. Use index tuning 
      D. Profiler to monitor performance statistics of query 



      Answer: C 
      Gabio_final:
      A PRIMARY KEY has a CLUSTERED INDEX defined by default. Index tuning 
      will check and determine the best way to build indexes. The Index Tuning 
      Wizard allows you to select and create an optimal set of indexes and 
      statistics for a Microsoft SQL Server database without requiring an 
      expert understanding of the structure of the database, the workload, or 
      the internals of SQL Server.

      17. You plan to create two SQL Server 7 databases. One will be used for 
      Online Transaction Processing (OLTP), the other for Decision Support 
      Services (DSS). A client application is designed to capture production 
      information from the DSS database in real time. The application must 
      report product results and trends in a timely fashion, so you are 
      charged with maximizing the performance of the DSS database. What 
      arrangement will ensure the maximum performance for the DSS database 
      without significantly degrading the performance of the OLTP database? 
      A. Create both databases on the same computer, and add additional indexes 
      to support both the oltp and dss databases. 
      B. Install the OLTP and DSS database each on its own server. Use 
      transactional replication making the OLTP database the publisher, and the 
      DSS database being both the distributor and subscriber. 
      C. Install the OLTP and DSS database each on its own server. Backup the 
      transaction log from OLTP database and restore it on the dss database 
      routinely. 
      D. Create both databases on the same computer. Create distributed 
      transactions on the OLTP server to keep the DSS server synchronized. 



      Answer: B 
      Gabio_final:
      Opinion: If you want the DSS to have maximum performance, it will have 
      it if you routinely back up and restore the log. If you assign the DSS 
      to be both the distributor and subscriber, the performance will worsen. 
      All the people go with B, but when you take the exam, be sure you will 
      fail with the answers in the dumps. You have to go beyond them, read 
      BOL, and see the real answer. If you just search for OLTP in BOL and 
      read it carefully, you will be convinced of it. 
      BOL: Online backup. OLTP systems are often characterized by continuous 
      operations (twenty-four hours a day, seven days a week) for which 
      downtime is kept to an absolute minimum. Although Microsoft SQL Server 
      can back up a database while it is being used, schedule the backup 
      process to occur during times of low activity to minimize effects on 
      users. DSS: Because the users are not changing data, concurrency and 
      atomicity issues are not a concern; the data is changed only by 
      periodic, bulk updates made during off-hour, low-traffic times in the 
      database. 
      Use Import/Export DTS, and then you will see that it is all right. 

      Disagreement on the following question!!! 

      18. You have two SQL servers supporting two separate applications on 
      your network. Each application uses stored procedures for all data 
      manipulation. You need to integrate parts of the two applications. The 
      changes are limited to a few stored procedures that need to make calls 
      to remote stored procedures. What should you do to implement calls to 
      remote stored procedures? 
      A. Add the remote server as linked server. Fully qualify the remote 
      procedure name. 
      B. Configure SQL by setting REMOTE PROCEDURE TRANSACTION to 1 



      Answer:A

      (Gabio_final, Frodo: A 
      Abdul, Florian: B) 
      Gabio_final:
      I can say this 100%; I tried it in SQL Query Analyzer. You can execute a 
      remote procedure call only through a linked server (if you execute it 
      many times) or OPENROWSET (if you execute it only a few times).
      REMOTE PROCEDURE TRANSACTION: specifies that when a local transaction 
      is active, executing a remote stored procedure starts a Transact-SQL 
      distributed transaction managed by the Microsoft Distributed Transaction 
      Coordinator (MS DTC). I guess this is not a sufficient condition to make 
      calls to remote stored procedures, so I will go with A.
      OPENROWSET: Includes all connection information necessary to access 
      remote data from an OLE DB data source. This method is an alternative to 
      accessing tables in a linked server and is a one-time, ad hoc method of 
      connecting and accessing remote data using OLE DB. The OPENROWSET 
      function can be referenced in the FROM clause of a query as though it is 
      a table name. The OPENROWSET function can also be referenced as the 
      target table of an INSERT, UPDATE, or DELETE statement, subject to the 
      capabilities of the OLE DB provider. Although the query may return 
      multiple result sets, OPENROWSET returns only the first one.
      Remote stored procedures are a legacy feature of Microsoft SQL Server. 
      Their functionality in Transact-SQL is limited to executing a stored 
      procedure on a remote SQL Server installation. The distributed queries 
      introduced in SQL Server version 7.0 support this ability along with the 
      ability to access tables on linked, heterogeneous OLE DB data sources 
      directly from local Transact-SQL statements. Instead of using a remote 
      stored procedure call on SQL Server 7.0, use distributed queries and an 
      EXECUTE statement to execute a stored procedure on a remote server.
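A sketch of answer A (the server, database, and procedure names are assumptions):

```sql
-- Register the remote SQL Server as a linked server once:
EXEC sp_addlinkedserver @server = 'REMOTESRV', @srvproduct = 'SQL Server'

-- Then call its procedure with a fully qualified four-part name
-- (server.database.owner.procedure):
EXEC REMOTESRV.Sales.dbo.usp_GetOrders
```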

      Disagreement on the following question!!! 

      19. You increase the number of users of your customer service 
      application from 20 to 120. With the additional users, the response time 
      when retrieving and updating data has slowed substantially. You examine 
      all the queries and indexes and discover that they are all fully 
      optimized. The application seems to run properly as long as the number 
      of users is less than 50. What can you do to solve the problem? 
      A. Ensure table hints are used in key queries 
      B. Increase lock_timeout 
      C. Free up dirty pages in memory with a short recovery interval 
      D. Ensure the application is using an optimistic locking strategy 
      instead of a pessimistic locking strategy. 



      Answer:D

      (Gabio_final: A 
      Abdul, Florian, Frodo: D)

      Abdul:
      By default, pessimistic locking will lock during both read and update. 
      Florian:
      BOL: Optimistic concurrency control works on the assumption that resource 
      conflicts between multiple users are unlikely (but not impossible), and 
      allows transactions to execute without locking any resources. Only when 
      attempting to change data are resources checked to determine if any 
      conflicts have occurred. If a conflict did occur, the application must 
      read the data and attempt the change again (for example, if a second 
      transaction updated a row the first transaction previously had read and 
      wanted to update based on the previously read value that now does not 
      exist). 
      Pessimistic concurrency control locks resources as they are required, for 
      the duration of a transaction. Unless deadlocks occur, a transaction is 
      assured of successful completion. 
      Gabio_final:
      A: The SQL Server query optimizer automatically makes the correct 
      determination. It is recommended that table-level locking hints be used 
      to change the default locking behavior only when absolutely necessary. 
      Disallowing a locking level can adversely affect concurrency.
      B: Specifies the number of milliseconds a statement waits for a lock to 
      be released.
      C: Cached pages that have been modified since the last checkpoint. 
      D: In optimistic concurrency control, users do not lock data when they 
      read it. When an update is performed, the system checks to see if the 
      data was changed by another user after it was read. If the data was 
      updated by another user, an error is raised. Typically, the user 
      receiving the error rolls back the transaction and starts over. This is 
      called optimistic because it is mainly used in environments where there 
      is low contention for data, and where the cost of occasionally rolling 
      back a transaction outweighs the cost of locking data when it is read. 
      The optimistic technique supports higher concurrency in the normal case 
      where conflicts between updaters are rare. If your application needs to 
      ensure that a row cannot be changed after it is fetched, you can use the 
      UPDLOCK hint in your SELECT statement to achieve this effect.
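      A minimal sketch of the optimistic pattern in T-SQL, assuming a 
      hypothetical Customers table that carries a timestamp (rowversion) 
      column named RowVer:

```sql
-- Read without holding any locks; remember the row version we saw.
DECLARE @ver timestamp
SELECT @ver = RowVer FROM Customers WHERE CustomerID = 42

-- Later, update only if nobody changed the row in the meantime.
UPDATE Customers
SET    Phone = '555-0100'
WHERE  CustomerID = 42
  AND  RowVer = @ver          -- the optimistic check

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another user; re-read and retry.'
```

      If the row must instead stay unchanged after the read, the SELECT can 
      take the UPDLOCK hint inside a transaction, as the BOL quote notes. 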


      20.Your users report that your database application takes an excessive 
      amount 
      of time to complete an operation. Over time, with the addition of new rows 

      and changes to existing rows, the situation has worsened. You suspect that 

      the tables are not optimally indexed. You plan to use the SQL Server 
      Profiler Create Trace Wizard to find out the cause of the problem. What 
      should you use the Create Trace Wizard to do? (Choose two.) 
      A. Find the worst performing query 
      B. Identify scans of large tables 
      C. Identify the cause of a Deadlock 
      D. Trace T-SQL activity by application 
      E. Trace T-SQL activity by user 


      Answer: A, B 
      Florian:
      (100% tested) (Some dumpers say it is A, D. There is a question in the 
      Transcender that talks about general optimization, but here it is 
      talking about index optimization, so go for A, B.) 


      21.You have an application that makes four connections to the SQL 
      Server at the same time. The connections are used to execute SELECT, 
      INSERT, UPDATE, and DELETE statements. Each connection is used to 
      execute one of these four statements. The application occasionally 
      stops responding when a user is trying to update or delete rows, and 
      then the user must close the application. The problem occurs when a 
      user attempts to execute an UPDATE or DELETE statement after submitting 
      a SELECT statement that retrieves a result set of more than 10,000 
      rows. What can you do to prevent the problem? 
      A. On the select session, set deadlock priority low 
      B. On the update/delete session, set deadlock priority low
      C. On the connection for the SELECT statement, set the isolation level 
      to READ UNCOMMITTED
      D. Set the query wait option to 50,000 



      Answer: C 
      Gabio_final:
      Blocking happens when one connection from an application holds a lock and 
      a second connection requires a conflicting lock type. This forces the 
      second connection to wait, blocked on the first. One connection can block 
      another connection, regardless of whether they emanate from the same 
      application or separate applications on different client computers.

      A,B: Controls the way the session reacts when in a deadlock situation. 
      Deadlock situations arise when two processes have data locked, and each 
      process cannot release its lock until other processes have released 
      theirs.
      SET DEADLOCK_PRIORITY {LOW | NORMAL | @deadlock_var}
      LOW 
      Specifies that the current session is the preferred deadlock victim. 
      The deadlock victim's transaction is automatically rolled back by 
      Microsoft SQL Server, and the deadlock error message 1204 is returned 
      to the client application. 
      NORMAL 
      Specifies that the session return to the default deadlock-handling 
      method. 
      So, it doesn't solve the problem.
      C: Use as low an isolation level as possible to avoid deadlocks. It's 
      equivalent to NOLOCK. Mention: Do not issue shared locks and do not 
      honor exclusive locks. When this option is in effect, it is possible to 
      read an uncommitted transaction or a set of pages that are rolled back 
      in the middle of a read. Dirty reads are possible. Only applies to the 
      SELECT statement.
      D: Use the query wait option to specify the time in seconds (from 0 
      through 2147483647) that a query waits for resources before timing 
      out. If the default value of -1 is used, then the time-out is 
      calculated as 25 times the estimated query cost. 
      Important: A query may hold locks while it waits for memory. In rare 
      situations, it is possible for an undetectable deadlock to occur. 
      Decreasing the query wait time prevents such deadlocks from occurring. 
      Eventually, a waiting query will be terminated and its locks released. 
      However, increasing the maximum wait time may increase the amount of 
      time for the query to be terminated. Changes to this option are not 
      recommended.
      Use a query or lock timeout to prevent a runaway query and avoid 
      distributed deadlocks.
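      A minimal sketch of answer C, assuming the large read-only SELECT runs 
      on its own connection; the column and date values are hypothetical:

```sql
-- On the connection that runs the big read-only SELECT:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- This scan no longer takes shared locks, so it cannot block
-- (or be blocked by) the UPDATE and DELETE connections.
SELECT OrderID, OrderAmount
FROM   Orders
WHERE  OrderDate >= '19990101'
```

      The trade-off is the one described above: dirty reads become possible, 
      which is usually acceptable for large reporting reads. 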


      22.How do you minimize deadlocking? 
      A. Set deadlock_priority to low 
      B. Use clustered & nonclustered indexes 
      C. Use resources in same sequence in all transactions 



      Answer: C 
      Gabio_final:
      To help minimize deadlocks: 
      - Access objects in the same order.
      - Avoid user interaction in transactions. 
      - Keep transactions short and in one batch. 
      - Use as low an isolation level as possible. 
      - Use bound connections. 
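      A small illustration of the same-order rule, using two hypothetical 
      transactions over Savings and Checking tables; when every transaction 
      touches the tables in the same order, the circular wait that causes a 
      deadlock cannot form:

```sql
-- Every transaction updates Savings first, then Checking.
BEGIN TRANSACTION
UPDATE Savings  SET Balance = Balance - 100 WHERE AccountID = 1
UPDATE Checking SET Balance = Balance + 100 WHERE AccountID = 1
COMMIT TRANSACTION

-- A second transaction follows the SAME order, so it simply waits
-- for the first to finish instead of deadlocking with it.
BEGIN TRANSACTION
UPDATE Savings  SET Balance = Balance - 50 WHERE AccountID = 2
UPDATE Checking SET Balance = Balance + 50 WHERE AccountID = 2
COMMIT TRANSACTION
```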


      23.You issue an UPDATE statement and then run a SELECT query to verify 
      that the update was accurate. You find out that the UPDATE statement 
      was executed properly. However, the next time you log on to the SQL 
      Server computer, it appears that your UPDATE statement was not 
      executed. What is the most likely problem? 
      A. The ???? Option is set to ON.
      B. The SHOWPLAN_ALL Option is set to ON.
      C. The IMPLICIT_TRANSACTIONS Option is set to ON. 
      D. The ??FLTR?? Option is set to ON.


      Answer: C 
      Gabio_final:
      Transactions that are automatically opened as the result of this setting 
      being ON must be explicitly committed or rolled back by the user at the 
      end of the transaction. Otherwise, the transaction and all the data 
      changes it contains are rolled back when the user disconnects. After a 
      transaction is committed, executing one of the statements above starts a 
      new transaction.
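      A minimal sketch of how answer C produces exactly this symptom; the 
      table and column names are hypothetical:

```sql
SET IMPLICIT_TRANSACTIONS ON

-- This UPDATE silently opens a transaction.
UPDATE Products SET Price = Price * 1.10 WHERE ProductID = 7

-- Same session: the SELECT sees the change, so the update looks fine.
SELECT Price FROM Products WHERE ProductID = 7

-- But nothing was committed. Disconnecting here rolls the change back.
-- To make it stick, an explicit commit is required:
COMMIT TRANSACTION
```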


      24.You want to increase product prices in the database in the 
      following manner: up to $50, increase by 5%; between $50 and $100, 
      increase by 10%; over $100, increase by 15%. How can you accomplish 
      this and minimize the performance hit on the database? 
      A.
      UPDATE Inventory
      SET Price = Price * 1.05
      WHERE Price <= 50
      UPDATE Inventory
      SET Price = Price * 1.10
      WHERE Price BETWEEN 50 AND 100
      UPDATE Inventory
      SET Price = Price * 1.15
      WHERE Price > 100
      B.
      UPDATE Inventory
      SET Price = CASE
      WHEN Price < 50 THEN Price * 1.05
      WHEN Price BETWEEN 50 AND 100 THEN Price * 1.10
      ELSE Price * 1.15 END
      C. TSQL using cursors 
      D. TSQL using cursors 


      Answer: B
      Abdul:
      The answer is a long SQL with CASE and WHEN. 
      Frodo: A would give the same result but takes more resources, since 
      you need to scan or seek the table three times. Cursors are even 
      slower.

      25.You are testing a marketing application that gathers demographic 
      information about each household in the country. The demographic table 
      in the application contains more than 1,000 columns. Users report 
      problems regarding the performance of the application. As part of your 
      investigation into these reports, you are reviewing how the logical 
      data model was implemented. Most of the reports relate to long 
      response times when users are updating or retrieving data from the 
      demographic table. Nearly 90 percent of the users search or update 20 
      of the columns. The remaining columns are seldom used, but they are 
      important. What should you do to improve data retrieval and update 
      performance when accessing the information in the demographic table? 
      A. Create a clustered index on the Demographic table over the most 
      accessed columns 
      B. Create a VIEW based on the Demographic table which selects the 20 
      most accessed columns 
      C. Divide the data in the Demographic table into two new tables, with 
      one table containing the 20 most accessed columns and the other 
      containing the remaining columns 
      D. Create a series of stored procedures that select or update the 
      Demographic table according to user needs 



      Answer: C 
      Florian:
      BOL: Vertical Partitioning 
      Vertical partitioning segments a table into multiple tables containing 
      fewer columns. The two types of vertical partitioning are 
      normalization and row splitting. Normalization is the standard 
      database process of removing redundant columns from a table and 
      placing them in secondary tables linked to the primary table by 
      primary key and foreign key relationships. Row splitting divides the 
      original table vertically into tables with fewer columns. Each logical 
      row in a split table matches the same logical row in the others. For 
      example, joining the tenth row from each split table re-creates the 
      original row. Like horizontal partitioning, vertical partitioning 
      allows queries to scan less data, hence increasing query performance. 
      For example, a table containing seven columns, of which only the first 
      four are commonly referenced, may benefit from splitting the last 
      three columns into a separate table. Vertical partitioning should be 
      considered carefully because analyzing data from multiple partitions 
      requires queries joining the tables, possibly affecting performance if 
      partitions are very large. 
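      A sketch of the row-splitting idea behind answer C; the column names 
      here are hypothetical stand-ins for the real 1,000+ columns:

```sql
-- Hot table: the 20 frequently searched/updated columns.
CREATE TABLE DemographicHot (
    HouseholdID int NOT NULL PRIMARY KEY,
    Income      money,
    ZipCode     char(5)
    -- ...the other frequently used columns
)

-- Cold table: the seldom-used columns, keyed the same way.
CREATE TABLE DemographicCold (
    HouseholdID int NOT NULL PRIMARY KEY
        REFERENCES DemographicHot (HouseholdID),
    PetCount    int,
    RoofType    varchar(30)
    -- ...the remaining columns
)

-- The original row is reassembled only when the rare columns are needed.
SELECT h.Income, c.RoofType
FROM   DemographicHot h
JOIN   DemographicCold c ON c.HouseholdID = h.HouseholdID
```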


      26.You have a nice database: updated statistics, appropriate indexes, 
      etc. Users report that SELECT queries are OK but modifications run 
      slowly. Why is that? 
      A. The transaction log is on a heavily used physical drive. 

      Answer: A 
      Abdul: Updating records also writes to the transaction log. 


      27.You have a database that is accessed by many different applications 
      written with different development tools. Many applications were 
      originally written to work against any relational database system that 
      complies with ANSI 92. Each application uses a different mechanism for 
      accessing the database. ODBC, DB-Library, and ADO are all used 
      simultaneously. Transaction processing is not uniform across all your 
      applications. How can you ensure that all applications handle 
      transactions in the same fashion? 
      A. Program all applications to issue the SET ANSI_DEFAULTS ON command 
      immediately after establishing a user connection. 



      Answer: A
      Use SET ANSI_DEFAULTS ON. 
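      A sketch of what answer A amounts to. As far as I recall, SET 
      ANSI_DEFAULTS ON enables a bundle of SQL-92 session options in one 
      statement; treat the exact list in the comment as an assumption and 
      verify it against BOL:

```sql
-- Issue once, right after connecting, regardless of the client library.
SET ANSI_DEFAULTS ON
-- Roughly equivalent to enabling, among others:
--   ANSI_NULLS, ANSI_NULL_DFLT_ON, ANSI_PADDING, ANSI_WARNINGS,
--   CURSOR_CLOSE_ON_COMMIT, IMPLICIT_TRANSACTIONS, QUOTED_IDENTIFIER
-- so every connection gets the same SQL-92 transaction behavior.
```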


      28.You are the database administrator for your company. You receive 
      reports that your Sales application has very poor response time. The 
      database includes a table that is defined as follows: 
      CREATE TABLE dbo.Orders ( OrderID Int IDENTITY(1,1) NOT NULL, 
      SalesPersonID Int NOT NULL, RegionID Int NOT NULL, OrderDate Datetime 
      NOT NULL, OrderAmount Int NOT NULL, CustomerID Int NULL) 
      The OrderID column is the primary key of the table. There are also 
      indexes on the RegionID and OrderAmount columns. You decide to run a 
      showplan on all queries in the application. The following query, which 
      accesses this table, is used to list total average sales by region: 
      SELECT t1.RegionID, AVG(t1.SalesTotal) AS RegionAverage 
      FROM (SELECT RegionID, SalesPersonID, SUM(OrderAmount) AS SalesTotal 
      FROM Orders GROUP BY RegionID, SalesPersonID) AS t1 GROUP BY t1.RegionID 
      You set the SHOWPLAN_TEXT option to ON and execute the query. The 
      showplan output is as follows: 
      |--Compute Scalar(DEFINE:([Expr1003]=If ([Expr1006]=0) then NULL 
         else ([Expr1007]/[Expr1006])))
        |--Stream Aggregate(GROUP BY:([Orders].[RegionID]) 
           DEFINE:([Expr1006]=COUNT([Expr1002]), 
           [Expr1007]=SUM([Expr1002])))
          |--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1004]=0) then NULL 
             else ([Expr1005])))
            |--Stream Aggregate(GROUP BY:([Orders].[RegionID], 
               [Orders].[SalesPersonID]) DEFINE:([Expr1004]=COUNT(*), 
               [Expr1005]=SUM([Orders].[OrderAmount])))
              |--Sort(ORDER BY:([Orders].[RegionID] ASC, 
                 [Orders].[SalesPersonID] ASC))
                |--Table Scan(OBJECT:([ServerA].[dbo].[Orders]))
      You suspect that this query is part of the problem because the 
      showplan indicates that the query is performing a table scan 
      operation. What is the most likely reason that this query is 
      performing a table scan? 


      A. Because the SELECT has no WHERE clause.
      B. Because the Order table has no PRIMARY KEY.
      C. Because the Order table has no CLUSTERED INDEX.
      D. ??

      Answer: A
      Because the SELECT has no WHERE clause. 


      Florian: True. If anyone can decrypt this SHOWPLAN_TEXT message. 
      Gabio_final:
      Indexes assist when a query: 
      Searches for rows that match a specific search key value (an exact 
      match query). An exact match comparison is one in which the query uses 
      the WHERE clause to specify a column entry with a given value. For 
      example: 
      WHERE emp_id = 'VPA30890F' 
      Searches for rows with search key values in a range of values (a range 
      query). A range query is one in which the query specifies any entry 
      whose value is between two values. For example: 
      WHERE job_lvl BETWEEN 9 AND 12 
      or, 
      WHERE job_lvl >= 9 AND job_lvl <= 12 



      29.Your database application includes a complex stored procedure that 
      displays status information as it processes transactions. You obtain 
      unexpected results in the status information when you run the stored 
      procedure with certain input parameters. You want to use SQL Server 
      Profiler to help find the problem in your stored procedure. Which four 
      events should you track? (Choose four) 



      Answer: 
      Abdul, Florian:
      SQLTransaction 
      SP:StmtCompleted 
      SQL:StmtStarting 
      Object:Opened 
      Gabio_final:
      SQL:BatchCompleted, SQL:BatchStarting, SP:StmtStarting, SP:StmtCompleted 


      29.You suspect that a stored procedure modifies a table randomly. You 
      want to use SQL Server Profiler to help find the problem in your 
      stored procedure. Which four events should you track? (Choose four) 

      Answer: all the events with an SP: prefix and one with a SQL: prefix; 
      do not choose any event without a prefix. 
      Gabio_final adds: SQL:BatchCompleted, SQL:BatchStarting, 
      SP:StmtStarting, SP:StmtCompleted 
      XXX-Man:
      Oh and let me just say a big f*ck YOU to Gill Bates for even thinking 
      everybody is gonna be familiar with all of this sh*t! Bite me.

      XXX-man -- Had this b*tch! BILL YOU F U C K ! ! ! This question SUCKS! Be 
      sure you take a look at this crap in Profiler. Every one of mine has some 
      sort of prefix, but you can 'only' choose 4. So what is this stuff about 
      starting and stopping in the answers?? Any ideas??




      30.Multinational company with offices in NY, London, Nairobi and Cairo 
      tracks sales. Sales are in local currency. Nairobi and Cairo have 
      value added tax. HQ wants reports in US$ at the end of the month. They 
      have monthly replication set up. Currency rate fluctuations are the 
      reason that the total of a sale in US$ for a particular date is not 
      the same at the end of the month. You have these tables: 
      ConversionRate: (PK)Currency, (PK)RateValidDate, Rate, CHECK to reject 
      dates later than the current date 
      Sales: (PK)SalesID with uniqueidentifier, default NEWID(), 
      SalesAmount, TaxAmount, SalesDate 
      Check all that apply. 
      A. Can you report sales for today in US$ 
      B. Can you report sales in US$$$ at the end of the month 
      C. Can you generate a report for the whole month 
      D. Can you see tax information 
      E. Can you tell the difference at the end of the month 
      F. Can you compare sales in US$ for any two days 



      Answer: C, D 
      Gabio_final:
      A: No reference from Sales table to ConversionRate table therefore cannot 
      convert to US.
      B: see A
      C: A report of all sales can be generated using SalesDate
      D: You can see tax information using TaxAmount
      E: see A
      F: see A

      Disagreement on the following question!!!

      31.You are designing a distributed data model for an international 
      importing and exporting company. The company has sales offices in 
      London, Nairobi, and Cairo, with headquarters in New York. Sales are 
      recorded in the local currency in the local database, but the United 
      States dollar is used as a common currency. London and Nairobi have a 
      value-added tax, but Cairo does not. The company's New York 
      headquarters credits a sale to the local office in US$ at the time of 
      the sale, but reports its monthly sales in US$ converted at the end of 
      the month. Because of fluctuations in the exchange rate, there can be 
      differences in the US$ value of a sale between the date of the sale 
      and the date of the monthly report. You want to accomplish the 
      following goals: 
      - Ensure that all sales record primary keys are unique throughout the 
      distributed database. 
      - Ensure that a sales record created in London or Nairobi includes a 
      value-added tax, but that a sales record created in Cairo does not. 
      - Ensure that the local sales amounts can be calculated in US$ at the 
      time of the sale. 
      - Ensure that the local sales amounts can be calculated in US$ at the 
      end of the month. 
      - Ensure that the difference between the value of a sale in US$ at the 
      time of the sale and the value at the end of the month can be 
      calculated. 
      You take the following actions: 
      - Create a ConversionRate table with a Currency column, a Rate column, 
      and a RateValidDate column. 
      - Add a PRIMARY KEY constraint to the ConversionRate table on the 
      Currency column and the RateValidDate column. 
      - Add a CHECK constraint to the ConversionRate table rejecting a 
      RateValidDate date that is later than the current date. 
      - Create a Sales table with a uniqueidentifier column named SalesID, a 
      SalesAmount column, a nonnullable TaxAmount column and a SalesDate 
      column. 
      - Add a DEFAULT constraint to the Sales table on the SalesID column 
      that uses the NEWID function. 
      - Add a PRIMARY KEY constraint to the Sales table on the SalesID 
      column. 
      Which result or results do these actions produce? (Choose all that 
      apply) 
      A. Ensure that all sales record primary keys are unique throughout the 
      distributed database. 
      B. Ensure that a sales record created in London or Nairobi includes a 
      value-added tax, but that a sales record created in Cairo does not. 
      C. Ensure that the local sales amounts can be calculated in US$ at the 
      time of the sale. 
      D. Ensure that the local sales amounts can be calculated in US$ at the 
      end of the month. 
      E. Ensure that the difference between the value of a sale in US$ at 
      the time of the sale and the value at the end of the month can be 
      calculated. 



      Answer: 
      Gabio_final, Frodo: A, C, D, E
      Abdul, Florian: A, C, D 


      Gabio_final:
      A: The uniqueidentifier data type stores 16-byte binary values that 
      operate as globally unique identification numbers (GUID). A GUID is a 
      binary number that is guaranteed to be unique; no other computer in the 
      world will generate a duplicate of that GUID value. The main use for a 
      GUID is for assigning an identifier that must be unique in a network that 
      has many computers at many sites.
      B: Because TaxAmount is nonnullable, a value must be inserted even for 
      Cairo sales, so this goal is not met.


      31.Multinational company with offices in NY, London, Nairobi and Cairo 
      tracks sales. Sales are in local currency. Nairobi and London have 
      value added tax; Cairo doesn't have value added tax. HQ wants reports 
      in US$ at the end of the month. Currency rate fluctuations are the 
      reason that the total of a sale in US$ for a particular date is not 
      the same at the end of the month. You have a table that stores rates 
      for all dates. This table has a constraint that doesn't allow you to 
      enter an effective date later than today. You have a Sales table with 
      only these fields: 
      SALESID defaults NEWID(), SALESAMOUNT NOT NULL and TAXAMOUNT NOT NULL. 
      Check all that apply. 
      A. Can you report sales for the date in US$? 
      B. Can you report sales in US$ at the end of the month? 
      C. Can you tell the difference at the end of the month? 
      D. Nairobi and London track tax, Cairo doesn't 
      Answer: B, C 



      32.Your database includes a job_cost table that typically holds 
      100,000 rows but can grow or shrink by as much as 75,000 rows at a 
      time. The job_cost table is maintained by a batch job that runs at 
      night. During the day, the job_cost table is frequently joined to 
      other tables by many different queries. Your users report that their 
      initial queries are very slow, but then response time improves for 
      subsequent queries. How should you improve the response time of the 
      initial queries? 
      A. Run sp_createstats as part of the nightly batch job 
      B. Run sp_updatestats as part of the nightly batch process 
      C. Set the auto update statistics database option to true 
      D. Can't remember, but some weird database option 



      Answer: B 
      Gabio_final:
      SQL Server keeps statistics about the distribution of the key values 
      in each index and uses these statistics to determine which index(es) 
      to use in query processing. Users can create statistics on nonindexed 
      columns by using the CREATE STATISTICS statement. Query optimization 
      depends on the accuracy of the distribution steps: 
      - If there is significant change in the key values in the index, rerun 
      UPDATE STATISTICS on that index. 
      - If a large amount of data in an indexed column has been added, 
      changed, or removed (that is, if the distribution of key values has 
      changed), or the table has been truncated using the TRUNCATE TABLE 
      statement and then repopulated, use UPDATE STATISTICS. 

      32. A table with 100,000 rows. Each night a scheduled job transfers data 
      in/out of it and reduces it to 30,000 rows. Each morning users complain 
      that from 8AM till 8:30AM their application is working slowly. How to 
      prevent this behavior?
      A. Run sp_createstats as part of the nightly batch job 
      B. Run sp_updatestats as part of the nightly batch process 
      C. Set the auto update statistics Database option to be true 
      D. Can't remember but some weird Database option 



      Answer: B
      Or use UPDATE STATISTICS as part of the job 
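      A sketch of what the nightly job could run after the data transfer; 
      sp_updatestats and UPDATE STATISTICS are the real mechanisms, and 
      job_cost is the table from the question:

```sql
-- At the end of the nightly batch, refresh the optimizer's statistics
-- so the first morning queries compile against accurate distributions.
UPDATE STATISTICS job_cost   -- just this table, or...
EXEC sp_updatestats          -- ...every user table in the database
```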


      33.You have a database with a full-text index. A nightly job backs it 
      up and then restores it. Next you run a full-text query against a 
      table you're sure contains a word. An incorrect result is returned. 
      What should you do? 


      Answer: Repopulate the full-text catalog as part of the job. 
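      A sketch of the fix, assuming the catalog is named Sales_FT (a 
      hypothetical name); sp_fulltext_catalog is the SQL Server 7.0 
      procedure for catalog population:

```sql
-- After the restore step of the nightly job, rebuild the catalog so
-- the full-text index again matches the table contents.
EXEC sp_fulltext_catalog 'Sales_FT', 'start_full'
```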


      34.You need to create a 6GB online transaction processing (OLTP) 
      database. Your SQL Server computer has two disk controllers, and each 
      controller has four 6GB hard disk drives. Each hard disk drive is 
      configured as a separate NTFS partition. Microsoft Windows NT, the 
      Windows NT swap file, and SQL Server are all installed on drive C. The 
      remaining drives, which are labeled as drives D through J, are empty. 
      How should you create the OLTP database? 
      A. 6GB of data on D, log on E 
      B. Four 2GB files on disks D-G, three log files on H-J 
      C. Create the data portion of the database as six separate files on 
      drives D through I, with one file on each drive. Create the 
      transaction log as a single file on drive J 
      D. Something wrong: 24 files on drives D to I 



      Answer: C 
      Gabio_final:
      Do not place the transaction log file(s) on the same physical disk with 
      the other files and filegroups. Using files and filegroups improves 
      database performance by allowing a database to be created across multiple 
      disks, multiple disk controllers, or RAID (redundant array of independent 
      disks) systems. For example, if your computer has four disks, you can 
      create a database that comprises three data files and one log file, with 
      one file on each disk. As data is accessed, four read/write heads can 
      simultaneously access the data in parallel, which speeds up database 
      operations. To maximize performance, create files or filegroups on as many 

      different available local physical disks as possible, and place objects 
      that compete heavily for space in different filegroups.
      Because SQL Server writes data to the transaction log sequentially, 
      performance is not improved by creating multiple transaction log files.
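      A sketch of answer C as a CREATE DATABASE statement; the database 
      name, file names, and paths are illustrative:

```sql
CREATE DATABASE SalesOLTP
ON
    (NAME = sales_dat1, FILENAME = 'D:\data\sales_dat1.mdf', SIZE = 1000MB),
    (NAME = sales_dat2, FILENAME = 'E:\data\sales_dat2.ndf', SIZE = 1000MB),
    (NAME = sales_dat3, FILENAME = 'F:\data\sales_dat3.ndf', SIZE = 1000MB),
    (NAME = sales_dat4, FILENAME = 'G:\data\sales_dat4.ndf', SIZE = 1000MB),
    (NAME = sales_dat5, FILENAME = 'H:\data\sales_dat5.ndf', SIZE = 1000MB),
    (NAME = sales_dat6, FILENAME = 'I:\data\sales_dat6.ndf', SIZE = 1000MB)
LOG ON
    -- Log on its own spindle so sequential log writes are undisturbed.
    (NAME = sales_log, FILENAME = 'J:\log\sales_log.ldf', SIZE = 1000MB)
```

      Spreading the six data files across both controllers lets multiple 
      read/write heads work in parallel, which is the point of answer C. 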


      35.You have a table with a clustered primary key. The table is used 
      frequently for both queries and data modification. As part of a review of 
      data storage and disk utilization, you run the DBCC SHOWCONTIG statement. 
      The statement provides the following output: 
      Pages Scanned 158 
      Extents Scanned 21 
      Extent Switches 20 
      Avg. pages per extent 7.5 
      Scan Density 95.24% [20:21] 
      Extent Scan Fragmentation 4.76% 
      Avg. Bytes Free per Page 408.4 
      Avg. Page Density (full) 94.95% 
      What does this output tell you about how the data is stored? (Choose all 
      that apply) 
      A. The table is not externally fragmented. 
      B. The table is not internally fragmented. 
      C. The number of Extent Switches is excessive. 
      D. The row size does not efficiently fit on a page. 
      E. The IAM page does not reflect the actual extent usage. 



      Answer: A, B 
      Gabio_final:
      A: External fragmentation can be evaluated by looking at the Scan 
      Density [Best Count: Actual Count] line of DBCC SHOWCONTIG output. 
      This value should be 100.00% (or a very close ratio).
      B: Internal fragmentation can be evaluated by looking at the Avg. Page 
      Density (full) line of DBCC SHOWCONTIG output. As a general rule, it 
      should be greater than 90%.
      C: Extent switches - the number of times the DBCC statement moved from 
      one extent to another while it traversed the pages of the table or 
      index. It should be close to Extents Scanned.
      D: Average page density (as a percentage). This value takes into 
      account row size, so it is a more accurate indication of how full your 
      pages are. The higher the percentage, the better.

      The DBCC SHOWCONTIG statement traverses the page chain at the leaf 
      level of the specified index when index_id is specified. If only 
      table_id is specified, or if index_id is 0, the data pages of the 
      specified table are scanned. 
      DBCC SHOWCONTIG determines whether the table is heavily fragmented. Table 
      fragmentation occurs through the process of data modifications (INSERT, 
      UPDATE, and DELETE statements) made against the table. Because these 
      modifications are not usually distributed equally among the rows of the 
      table, the fullness of each page can vary over time. For queries that 
      scan part or all of a table, this can cause additional page reads.
      When a table is heavily fragmented, reduce fragmentation and improve 
      read-ahead (parallel data scan) performance by dropping and re-creating a 
      clustered index (without using the SORTED_DATA option). Re-creating a 
      clustered index reorganizes the data, resulting in full data pages. The 
      level of fullness can be configured using the FILLFACTOR option.
      Scan Density 
      [Best Count: Actual Count]
      Best count is the ideal number of extent changes if everything is 
      contiguously linked. Actual count is the actual number of extent changes. 
      The number in scan density is 100 if everything is contiguous; if it is 
      less than 100, some fragmentation exists. Scan density is a percentage.
      Logical Scan Fragmentation
      Percentage of out-of-order pages returned from scanning the leaf pages 
      of an index. This number is not relevant to heaps and text indexes. An 
      out-of-order page is one for which the next page indicated in an IAM 
      is a different page than the page pointed to by the next-page pointer 
      in the leaf page.

      Avg. Bytes free per page
      Average number of free bytes on the pages scanned. The higher the 
      number, the less full the pages are. Lower numbers are better. This 
      number is also affected by row size; a large row size can result in a 
      higher number.
      Avg. Page density (full)
      Average page density (as a percentage). This value takes into account row 
      size, so it is a more accurate indication of how full your pages are. The 
      higher the percentage, the better.

      The following is a blurb about fragmentation, it will help with the next 
      few questions

      Internal fragmentation occurs when page density is low. Page density 
      refers to how full, or dense, a page is. Lower page density equates to 
      more I/Os when performing a SELECT statement. In SQL 7.0, a page is 
      8 KB. The maximum amount of data that can be contained in a single row 
      is 8060 bytes, not including text, ntext and image data. Let's say that 
      you have a SQL 7.0 table with a row size of 4040 bytes. Only one row 
      will fit on a page in this scenario. However, if you were able to reduce 
      the row size to 4030 bytes, then two rows would fit on one page. This 
      would result in half the number of I/Os per SELECT statement, making a 
      much more efficient table design. Internal fragmentation can be 
      evaluated by looking at the Avg. Page Density (full) line on DBCC 
      SHOWCONTIG output. As a general rule, it should be greater than 90%.


      External fragmentation occurs when extents are not contiguous. Space is 
      allocated to tables and indexes in extents. An extent is 8 pages, so in 
      SQL 7.0 an extent is 64 KB. When extents are out of order on the disk, 
      data access is less than optimal. It's basically the same philosophy as 
      disk fragmentation. External fragmentation can be evaluated by looking 
      at the Scan Density [Best Count: Actual Count] line on DBCC SHOWCONTIG 
      output. This value should be 100.00%.

      How do you fix fragmentation? 

      As mentioned above, internal fragmentation can sometimes be fixed by 
      changing the row size. However, page splits can also cause internal 
      fragmentation. A page split occurs on a table with a clustered index 
      when there is no room left on the page to store a new row. SQL Server 
      will split the page in half to make room for the new row. This condition 
      can be corrected by dropping and re-creating the clustered index, which 
      will reallocate the pages in an efficient manner. External fragmentation 
      can also be corrected by dropping and re-creating the clustered index.
      When a table is heavily fragmented, reduce fragmentation and improve 
      read-ahead (parallel data scan) performance by dropping and re-creating a 
      clustered index (without using the SORTED_DATA option). Re-creating a 
      clustered index reorganizes the data, resulting in full data pages. The 
      level of fullness can be configured using the FILLFACTOR option.


      36.You have a table with a clustered primary key. The table is used 
      frequently for both queries and data modification. As part of a review of 
      data storage and disk utilization, you run the DBCC SHOWCONTIG statement. 
      The statement provides the following output: 
      Pages Scanned: 158 Extents scanned: 20 
      Extent Switches: 21 
      Scan Density: 20:21 (Best Count : ActualCount) 
      Extent scan Fragmentation: 4.83% 
      Avg bytes free per page: 284.7 
      Avg Page density(full): 94.83% 
      What does this output tell you about how the data is stored? (Choose all 
      that apply) 
      A. The Table is not internally fragmented 
      B. The table is not Externally fragmented 
      C. The Extent Switches are too high 
      D. The rows fit well on the page 



      Answer: A, B, D 
      AbduL
      A: Internal fragmentation can be evaluated by looking at the "Avg. Page 
      Density (full)" line on DBCC SHOWCONTIG output. As a general rule, it 
      should be greater than 90%. 

      B: External fragmentation can be evaluated by looking at the "Scan 
      Density [Best Count: Actual Count]" line on DBCC SHOWCONTIG output. This 
      value should be 100.00% (or a very close ratio). 

      C: Extent switches - the number of times the DBCC statement moved from 
      one extent to another while it traversed the pages of the table or 
      index. It should be close to Extents Scanned, and 21 versus 20 is close, 
      so the switches are not too high. 

      D: Average page density (as a percentage). This value takes into account 
      row size, so it is a more accurate indication of how full your pages 
      are. The higher the percentage, the better. 
      Gabio_final:
      There are two different versions of this question in the exam. In a DSS 
      system, 94% is a good fit; but in an OLTP system, 94% is too full. 


      37.In the last year, your users have inserted or updated over 200,000 rows 
      in 
      a table named Sales. The Sales table has a clustered primary key. Response 

      times were good, but users report that response times have become worse 
      when queries are running against the Sales table. You run the DBCC 
      SHOWCONTIG statement on the Sales table and receive the following output: 
      Pages Scanned: 1657 
      Extents scanned: 210 
      Extent Switches: 1528 
      Avg. Page per extent: 7.9 
      Scan density: 13.60% [208:1529] 
      Logical Scan fragmentation: 91.43 
      Extent scan fragmentation: 1.43 
      Avg bytes free per page: 2843.5 
      Avg Page density(full): 64.87 What should you do to improve the response 
      times for queries? 
      A. Update the statistics on the Sales table 
      B. Create additional statistics on the Sales table 
      C. Run the DBCC CHECKTABLE statement on the Sales table 
      D. Run the DBCC DBREINDEX statement on the Sales table 



      Answer: D 
      Florian: DBCC DBREINDEX is Microsoft's solution for rebuilding 
      fragmented indexes. The output shows heavy fragmentation (scan density 
      13.60%, logical scan fragmentation 91.43%, page density only 64.87%); 
      statistics (A, B) and DBCC CHECKTABLE (C) do not defragment anything.
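      For the Sales table above, the rebuild could look like this sketch; the 
      index name pk_sales and key column are assumptions, not from the 
      question:

```sql
-- Rebuild every index on the table in one statement
-- ('' means all indexes); 90 is the fill factor.
DBCC DBREINDEX ('Sales', '', 90)

-- Roughly equivalent for just the clustered index, rebuilt in place
-- without a separate DROP (hypothetical index/column names):
-- CREATE UNIQUE CLUSTERED INDEX pk_sales ON Sales (SalesID)
--     WITH DROP_EXISTING, FILLFACTOR = 90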




      38.You have a table: 
      CREATE TABLE Employee( Employee_id int identity (1,1), Fname varchar (30), 

      Lname varchar (30), Phone int , Address varchar (30), Salary money) 
      The phone number is an int but you want to show it like (999)-999-9999. 
      Choose the correct SELECT query. 
      A. Select Employee_id, Fname, Lname, '(' + 
      substring(convert(char(3),phone),3,0) + ')-' + 
      substring(convert(char(3),phone),3,5) + '-' + 
      substring(convert(char(3),phone),3,6) from employee 
      B. Select Employee_id, Fname, Lname, '(' + 
      substring(convert(char(3),phone),0,3) + ')-' + 
      substring(convert(char(3),phone),3,3) + '-' + 
      substring(convert(char(3),phone),6,3) from employee 
      C. Select Employee_id, Fname, Lname, '(' + 
      substring(convert(char(3),phone),1,3) + ')-' + 
      substring(convert(char(3),phone),4,3) + '-' + 
      substring(convert(char(3),phone),7,4) from employee 
      D. Select Employee_id, Fname, Lname, '(' + 
      substring(convert(char(3),phone),3,1) + ')-' + 
      substring(convert(char(3),phone),3,4) + '-' + 
      substring(convert(char(3),phone),3,7) from employee 



      Answer: C 
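      Only C carves out positions 1-3, 4-3 and 7-4, matching 
      (999)-999-9999. Note that the dump's convert(char(3),...) can hold only 
      three characters; a working version of the same logic needs char(10) 
      (and, strictly, a 10-digit number does not fit in an int, one of 
      several flaws in this dump question):

```sql
-- Sketch of the formatting logic behind answer C, widened to char(10):
SELECT Employee_id, Fname, Lname,
       '(' + SUBSTRING(CONVERT(char(10), Phone), 1, 3) + ')-' +
             SUBSTRING(CONVERT(char(10), Phone), 4, 3) + '-' +
             SUBSTRING(CONVERT(char(10), Phone), 7, 4)
FROM Employee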


      39.Your Orders table is defined as follow: 
      CREATE TABLE Orders ( OrderID Int IDENTITY (1,1) NOT NULL, SalesPersonID 
      Int NOT NULL, RegionID Int NOT NULL, OrderDate Datetime NOT NULL, 
      OrderAmount Int NOT NULL) 
      The table is becoming too large to manage. You must delete all sales that 
      are more than three years old. Which query will accomplish the desired 
      result? 
      A. Delete from Orders Where OrderDate < DATEADD(YY,-3,GETDATE()) 
      B. Delete from Orders Where OrderDate < DATEADD(YY,3,GETDATE()) 
      C. Delete from Orders Where OrderDate < GETDATE(), -3 
      D. Delete from Orders Where OrderDate < GETDATE(), +3 



      Answer: A 

      Abdul:
      C & D are out due to syntax. If you want to delete all the rows in a 
      table, TRUNCATE TABLE is faster than DELETE (BOL), but here only old 
      rows are deleted. 
      BOL: If a WHERE clause is not supplied, DELETE removes all the rows from 
      the table. 
      DATEADD returns a new datetime value based on adding an interval to the 
      specified date, so if today is 1/1/98 then DATEADD(YY,-3,GETDATE()) = 
      1/1/95, and rows with an OrderDate earlier than that (the < in A) are 
      more than three years old.
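      Answer A's logic, spelled out as a runnable sketch against the Orders 
      table from the question:

```sql
-- DATEADD(yy, -3, GETDATE()) evaluates to "three years before today";
-- anything with an earlier OrderDate is more than three years old.
DELETE FROM Orders
WHERE OrderDate < DATEADD(yy, -3, GETDATE())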


      40.You must reconcile the Checking account for your company. You have a 
      CheckRegister table, an InvalidCheck table, and a ClearedCheck table. The 
      ClearedCheck table lists checks that have cleared the bank. You must 
      update the ClearedDate column of the CheckRegister table for any checks 
      that are on the ClearedCheck table. If the check is in the ClearedCheck 
      table but not on the CheckRegister table, you must insert a row into the 
      InvalidCheck table. If the amount shown for a check in the ClearedCheck 
      table is different from the amount shown for the same check in the 
      CheckRegister table, you must insert a row into the InvalidCheck table. 
      Each row must be deleted from the ClearedCheck table after it has been 
      evaluated for accuracy. 
      Which statement group should you use to accomplish this task in the 
      shortest time? 
      A. DECLARE @CheckNumber int, @CheckAmount money, @ClearedDate datetime 
      DECLARE CheckRegisterCursor Cursor SCROLL FOR SELECT CheckNumber, 
      CheckAmount, ClearedDate FROM ClearedCheck FOR UPDATE 
      OPEN CheckRegisterCursor FETCH NEXT FROM CheckRegisterCursor INTO 
      @CheckNumber, @CheckAmount, @ClearedDate 
      WHILE @@Fetch_Status = 0 
      BEGIN 
      IF EXISTS (SELECT FROM CheckRegister WHERE CheckNumber = @CheckNumber 
      and CheckAmount = @CheckAmount) 
      BEGIN 
      UPDATE CheckRegister SET ClearedDate = @ClearedDate 
      DELETE ClearedCheck WHERE Current of CheckRegisterCursor 
      END 
      ELSE 
      BEGIN 
      INSERT InvalidCheck (CheckNumber, CheckAmount, ClearedDate) VALUES 
      (@CheckNumber, @CheckAmount, @ClearedDate) 
      DELETE ClearedCheck WHERE Current of CheckRegisterCursor 
      END 
      FETCH NEXT FROM CheckRegisterCursor INTO @CheckNumber, @CheckAmount, 
      @ClearedDate 
      END 
      CLOSE CheckRegisterCursor
      DEALLOCATE CheckRegisterCursor 
      B. DECLARE @CheckNumber int, @CheckAmount money, @ClearedDate datetime 
      DECLARE CheckRegisterCursor Cursor LOCAL FORWARD_ONLY FOR SELECT 
      CheckNumber, CheckAmount, ClearedDate FROM ClearedCheck FOR UPDATE OPEN 
      CheckRegisterCursor FETCH NEXT FROM CheckRegisterCursor INTO @CheckNumber, 

      @CheckAmount, @ClearedDate WHILE @@Fetch_Status = 0 BEGIN IF EXISTS 
      (SELECT FROM CheckRegister WHERE CheckNumber=@CheckNumber and 
      CheckAmount = @CheckAmount) UPDATE CheckRegister SET ClearedDate = 
      @ClearedDate ELSE INSERT InvalidCheck (CheckNumber, CheckAmount, 
      ClearedDate ) VALUES (@CheckNumber, @CheckAmount,@ClearedDate) DELETE 
      ClearedCheck WHERE Current of CheckRegisterCursor FETCH NEXT FROM 
      CheckRegisterCursor INTO @CheckNumber, @CheckAmount, @ClearedDate END 
      CLOSE CheckRegisterCursor DEALLOCATE CheckRegisterCursor 

      C. UPDATE CheckRegister 
      SET CheckRegister.ClearedDate = ClearedCheck.ClearedDate 
      FROM CheckRegister JOIN ClearedCheck ON CheckRegister.CheckNumber = 
      clearedCheck.CheckNumber AND 
      CheckRegister.CheckAmount=ClearedCheck.CheckAmount 
      DELETE ClearedCheck FROM ClearedCheck JOIN CheckRegister ON 
      CheckRegister.CheckNumber = ClearedCheck.CheckNumber AND 
      CheckRegister.CheckAmount = ClearedCheck.CheckAmount 
      INSERT InvalidCheck (CheckNumber, CheckAmount, ClearedDate) SELECT 
      CheckNumber,CheckAmount, ClearedDate FROM ClearedCheck 
      DELETE ClearedCheck 
      D. DECLARE @ClearedDate datetime SELECT @ClearedDate = ClearedDate FROM 
      ClearedCheck IF EXISTS (SELECT FROM CheckRegister WHERE CheckNumber IN 
      (SELECT CheckNumber FROM ClearedCheck)) UPDATE CheckRegister SET 
      ClearedDate = @ClearedDate ELSE INSERT InvalidCheck SELECT FROM 
      ClearedCheck DELETE ClearedCheck



      Answer: C
      Gabio_final:
      A: There is no problem with the cursor itself; I saw others say the 
      cursor does not work, but I don't see any reason why. The problem with A 
      is the UPDATE statement, which has no WHERE clause and will therefore 
      update every row in CheckRegister.
      SCROLL 
      Specifies that all fetch options (FIRST, LAST, PRIOR, NEXT, RELATIVE, 
      ABSOLUTE) are available. If SCROLL is not specified in an SQL-92 DECLARE 
      CURSOR, NEXT is the only fetch option supported. SCROLL cannot be 
      specified if FAST_FORWARD is also specified. 
      B: See A; also, it will not delete the evaluated rows from ClearedCheck.
      C: will work and is faster than cursors.
      D: has problems with its logic and cannot fulfill the requirement.
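      Gabio's objection to A can be seen by writing the UPDATE as it would 
      have to appear inside the loop; this is a sketch of the fix, not part of 
      the exam's option:

```sql
-- Option A's UPDATE hits every row because it lacks a WHERE clause.
-- Inside the cursor loop it would need to be filtered, e.g.:
UPDATE CheckRegister
SET ClearedDate = @ClearedDate
WHERE CheckNumber = @CheckNumber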


      41.A stored procedure is created and a Visual Basic user with SQL 
      authentication is trying to execute it but can't. Windows NT security is 
      not used. Choices given: 
      A. Give the user rights on the stored procedure 
      B. Make user part of NT group that has access to store procedure 
      C. Make user part of the SA role 
      D. Make user part of domain administrator 

      Answer: A 
      Gabio_final:
      B: does not work because only SQL authentication is in use.
      C: would work, but it is not a desirable solution.
      D: see B.
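      Answer A amounts to granting EXECUTE permission on the procedure; the 
      procedure and user names below are hypothetical:

```sql
-- Give the SQL-authenticated login's database user the right to run
-- the stored procedure (hypothetical names):
GRANT EXECUTE ON dbo.usp_GetOrders TO VbAppUser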


      42.You are using a Data Transformation Services (DTS) package to import 
      data 
      from a dbase III table to your SQL server database. The DTS package 
      creates a destination table based on the source table then imports the 
      data into the newly created table. The source table contains order 
      information from your international sales department. You want the 
      destination table to have the following structure: 
      CREATE TABLE Order ( Order Int, CustNo Int, OrderDate Smalldatetime, 
      Foreign Bit) 
      GO 
      The import process fails every time you run the DTS package. You have 
      filled in all required portions of the DTS Import Wizard. What should you 
      do to eliminate the importation error? 
      A. Customize the transformation script to rename the Order column and the 
      Foreign column 
      B. DBaseIII > text > bcp 
      C. Write a DBaseIII program to import directly into SQL server 
      D. Change the source names 



      Answer: A 

      Florian:
      A. Order and Foreign cannot be used as-is because they are keywords, so 
      you have to rename them; the renaming can be done in the transformation 
      script or in other ways. 
      B. everything bcp does is already included in DTS. 
      C. the idea is to use SQL Server instead of dBase III. 
      D. the source names have nothing to do with the keyword conflict. 
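      An alternative to renaming, if the column names must survive, is to 
      delimit the reserved words with brackets; this is a sketch of that 
      option, not the exam's answer:

```sql
-- ORDER and FOREIGN are T-SQL keywords; bracketing them lets them be
-- used as identifiers (renaming, as in answer A, avoids the issue
-- entirely):
CREATE TABLE [Order] (
    [Order]   int,
    CustNo    int,
    OrderDate smalldatetime,
    [Foreign] bit
)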


      43.Want to export data to Pivot table in Excel spreadsheet from SQL. Data 
      changes frequently. Want to automate the process of updating the Excel 
      spreadsheet and use SQL Server Agent to schedule Job to automate. 
      A. Bulk copy script tabbed delimited textfile 
      B. Bulk copy script to populate the spreadsheet 
      C. Data Transformation Services to export package to tab delimited 
      textfile 
      D. Data Transformation Services to populate the spreadsheet 



      Answer: D 
      Florian:
      always remember that DTS does everything bcp does; moreover, bcp cannot 
      populate a spreadsheet, and you don't need to export to a text file. So 
      the only option is DTS, which creates a package that can be run 
      automatically as a scheduled job to populate the spreadsheet, i.e. 
      source: SQL Server, destination: Excel 8.0 (or the like); save the 
      package and schedule it to run daily, monthly, etc.


      44.How do you add constraint to an existing table? 
      CREATE TABLE sample (Field1 int, Field2 int, Field3 int) 
      You want your Fields 1, 2 and 3 to be unique. 
      A. ALTER TABLE sample ADD CONSTRAINT fieldconst UNIQUE 
      NONCLUSTERED(field1, field2, field3) 
      B. ALTER TABLE sample ADD CONSTRAINT fieldconst1 UNIQUE NONCLUSTERED 
      field1 
      C. ALTER TABLE sample ADD CONSTRAINT fieldconst2 UNIQUE NONCLUSTERED 
      field2 
      D. ALTER TABLE sample ADD CONSTRAINT fieldconst3 UNIQUE NONCLUSTERED 
      field3 



      Answer: A 



      45.How do you add constraint to an existing table? 
      Create table sample(field1 int, field2 int, field3 int, bla bla bla)? 
      You want your field1-3 to be unique. 
      A. ALTER TABLE sample ADD CONSTRAINT fieldconst UNIQUE NONCLUSTERED 
      (field1, field2, field3) 
      B. ALTER TABLE sample ADD CONSTRAINT fieldconst1 UNIQUE NONCLUSTERED 
      (field1) 
      ALTER TABLE sample ADD CONSTRAINT fieldconst2 UNIQUE NONCLUSTERED (field2) 


      ALTER TABLE sample ADD CONSTRAINT fieldconst3 UNIQUE NONCLUSTERED (field3) 


      C. ?
      D. ??

      Answer: B 
      Abdul: separate nonclustered
      Frodo: As TR pointed out, you need to have () around field1, field2 and 
      field3 in B (earlier dumps didn't have this). I verified this on my 
      exam; the brackets were there.
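      The difference between Q44's answer A and Q45's answer B is composite 
      versus per-column uniqueness; a sketch of both forms:

```sql
-- One composite constraint: only the COMBINATION of the three values
-- must be unique (Q44, answer A):
ALTER TABLE sample ADD CONSTRAINT fieldconst
    UNIQUE NONCLUSTERED (field1, field2, field3)

-- Three separate constraints: EACH column must be unique on its own
-- (Q45, answer B):
ALTER TABLE sample ADD CONSTRAINT fieldconst1 UNIQUE NONCLUSTERED (field1)
ALTER TABLE sample ADD CONSTRAINT fieldconst2 UNIQUE NONCLUSTERED (field2)
ALTER TABLE sample ADD CONSTRAINT fieldconst3 UNIQUE NONCLUSTERED (field3)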




      46.You are implementing a transaction-based application for a credit card 
      company. More than 10 million vendors accept the company's credit card, 
      and more than 100 million people regularly use the credit card. Before 
      someone can make a purchase by using the credit card, the vendor must 
      obtain a credit authorization from your transaction-based application. 
      Vendors around the world must be able to authorize purchase in less than 
      30 seconds, 24 hours a day, 7 days a week. Additionally, the application 
      must be able to accommodate more vendors in more locations in the 
      future. 
      What should you do to implement the application to meet all of the 
      requirements? 
      A. Implement a Client/Server Architecture in which Vendors obtain an 
      authorized code from Centralized SQL SERVER that has enabled fall back 
      support. 
      B. Implement a Client/Server Architecture in which Vendors obtain an 
      authorized code from Centralized SQL SERVER that uses Microsoft Windows NT 

      Clustering service. 
      C. Implement an n-tier in which Vendors make calls to a single Microsoft 
      Transaction Server (MTS), which will obtain an authorization code from a 
      centralized SQL SERVER. 
      D. Implement an n-tier in which Vendors make calls to geographically 
      dispersed MTS application, which will obtain an authorization code from 
      geographically dispersed SQL SERVER. 



      Answer: D 
      Gabio_final:
      It's general knowledge! If there is only one MTS server in the HQ 
      office, your credit card system will be very slow. MTS is referred to in 
      the BOL under distributed transactions; I also gather that it is a 
      service installed on the server. 
      If this is the case then D would be the correct answer. BOL: Distributed 
      transactions span two or more servers known as resource managers. 
      The management of the transaction must be coordinated between the 
      resource managers by a server component called a transaction manager. As 
      if one server could handle all of those transactions; get outta town. 
      A & B are out! 


      47.A credit card company serves many customers all over the world. You 
      are developing an app to authorize the use of credit cards. What do you 
      need? 
      A. one SQL server 
      B. one SQL server, one MTS 
      C. n SQL server, one MTS 
      D. n SQL server, n MTS 
      Answer: D 


      48.You have a database that is used for storing the text of speeches given 
      by 
      certain government officials. The text of each speech is stored in the 
      Speech table, which is defined as follows: CREATE TABLE Speech (SpeechID 
      Char(32), Speechtext Text, AuthorID Char(32)) GO. A fulltext index exists 
      for all columns in the Speech table. You want to search for a speech that 
      includes the phrase "Fore Score and only". Which query should you use to 
      perform this search? 
      A. WHERE SpeechID like '%Fore Score and only%' 
      B. WHERE SpeechText Like '%Fore Score and only%'
      C. WHERE FREETEXT(SpeechID, 'Fore Score and only')
      D. WHERE FREETEXT(Speechtext, 'Fore score and only')

      Answer: D 
      Gabio_final:
      The MS SQL Server full-text search engine identifies important words and 
      phrases.
      B: LIKE determines whether or not a given character string matches a 
      specified pattern, so '%Fore...%' would also match text such as 
      'AlFore'. 
      Frodo: Not B, since LIKE doesn't work on text, just on character string 
      data types.
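      Answer D in full; FREETEXT requires a full-text index on the column, 
      which the question says exists:

```sql
-- FREETEXT matches on the meaning of the words, not an exact pattern,
-- and works on full-text indexed text columns:
SELECT SpeechID
FROM Speech
WHERE FREETEXT(Speechtext, 'Fore Score and only')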


      50.You are designing a table to track purchases for a fish canning 
      company. 
      All transactions occur on fishing vessels that are not required to handle 
      coins, and therefore all purchases must be rounded to the nearest whole 
      dollar amount. Purchases are made once a day, and fish must be purchased 
      in multiples of 100 for tuna, and multiples of 50 for salmon. The 
      Government limits the purchase of salmon to 150 daily. Each month, your 
      company must report to the Government the kind, quantity, and supplier for 

      each lot of tuna or salmon purchased. You want to accomplish the following 

      goals: 
      - Ensure that all purchase amounts are rounded to the nearest whole 
      dollar. 
      - Ensure that records for tuna purchased must be in multiple of 100, and 
      that records for salmon purchased must be in multiples of 50. 
      - Ensure that the daily total quantity of salmon purchased does not 
      exceed 150. 
      - Ensure that the required monthly Government report can be produced 
      reporting the kind, quantity, and supplier for each lot of tuna or salmon 
      purchased. 
      You take the following actions: 
      - Create the data model as shown in the exhibit. 
      Table1: Company CompanyID (PK) 
      Table2: Purchase CompanyID, PurchaseID (PK), PurchaseDate 
      Table3: PurchaseDetail PurchaseID (PK), LineNo (PK), FishName, Quantity, 
      Price, etc. 
      Table4: FishName FishName (PK) 
      - Add a CHECK constraint to the PurchaseDetailTable on the FishQuantity 
      column rejecting values not equal to 50 or 100. 
      - Add a trigger to the PurchaseDetail table rejecting values greater than 
      150 when FishName is salmon. 
      - Add a trigger to the PurchaseDetail table rounding the PurchaseAmount to 

      the nearest whole dollar amount. Which result or results do these actions 
      produce? (Choose all that apply) 
      A. All PurchaseAmount are rounded to the nearest whole dollar 
      B. Records for tuna purchased must be in multiples of 100, and records for 

      salmon purchased must be in multiples of 50 
      C. The daily total Quantity of salmon purchased cannot exceed 150 
      D. The monthly government report can be produced. Kind, Qty, Supplier for 
      each lot of tuna or salmon purchased. 



      Answer: A, D 
      Gabio_final:
      A: trigger 3 will do this.
      B: trigger 1 does not have a condition on fish type, so you could not 
      differentiate salmon from tuna. 
      C: trigger 2 will reject a single purchase over 150, not the total 
      quantity for the day.
      D: OK.

      51. (Same as 50. with one additional trigger): 
      You are the SQL Administrator for a fish canning company. Each day, fish 
      purchases are made. Tuna must be bought in lots of 100; salmon in lots 
      of 50. Purchases cannot exceed 150 salmon in any given day due to 
      government 
      restrictions. Every month, a report must be submitted to the government 
      listing kind, quantity and supplier of the purchased fish. The lots in the 

      report must be integer form. You design your model as follows: 
      Table1: Company CompanyID 
      Table2: Purchase CompanyID, PurchaseID, PurchaseDate 
      Table3: PurchaseDetail PurchaseID, FishName, Quantity, Price, etc. 
      Table4: FishName FishName 
      You then create the following triggers: 
      (A) A Trigger on inserts to PurchaseDetail that only allows purchases in 
      lots of 50 or 100. 
      (B) A Trigger on inserts to PurchaseDetail that does not accept purchases 
      of 150 salmon. 
      (C) A Trigger that returns the value of lots in integer form. 
      Does this implementation satisfy the objectives? (Choose all that apply) 
      A. Salmon can only be in lots of 150/day 
      B. Tuna is bought in lots of 100 and Salmon in lots of 50 
      C. The lots are stored as integer 
      D. The tuna and the salmon lots are not more then 150/day 



      Answer: C 
      Gabio_final:
      Trigger B rejects more than 150 salmon in a single purchase, but not the 
      per-day total. 
      Trigger A is not fish-specific. 
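      The gap Gabio identifies (a per-row check versus the per-day total) 
      could be closed with a trigger like the following sketch; the table and 
      column names follow the question's model, but the exact definition is an 
      assumption:

```sql
-- A per-purchase check (like trigger B) misses the daily total.
-- This trigger rejects the insert when any day's salmon total,
-- including the newly inserted rows, would exceed 150.
CREATE TRIGGER trg_SalmonDailyLimit ON PurchaseDetail
FOR INSERT
AS
IF EXISTS (
    SELECT p.PurchaseDate
    FROM PurchaseDetail pd
    JOIN Purchase p ON p.PurchaseID = pd.PurchaseID
    WHERE pd.FishName = 'salmon'
    GROUP BY p.PurchaseDate
    HAVING SUM(pd.Quantity) > 150
)
BEGIN
    RAISERROR ('Daily salmon limit of 150 exceeded', 16, 1)
    ROLLBACK TRANSACTION
END
```

      (A tighter version would join to the `inserted` pseudo-table so only 
      the affected days are checked.)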

      Disagreement on the following question!!! 

      52.You are implementing a logical data model. All of the tables in your 
      logical data model are normalized to at least third normal form. There are 

      no surrogate primary keys in any of the tables. Some table relationships 
      involve up to eight levels of parent, child, grandchild, and so forth. In 
      the model the primary key of each descendant table inherits the primary 
      key of all ancestor tables. You want to accomplish the following goals: 
      - Allow table at any level in the hierarchy to be joined to any other 
      table in the hierarchy. 
      - Ensure that tables are joined on a single column. 
      - Limit the index length of each primary key to 10 bytes or less. 
      - Ensure that key columns are always compared on a binary basis 
      regardless of which options were selected during the installation of SQL 
      Server. 
      You do the following: 
      - Implement the data model as is. 
      - Create indexes on all foreign keys. 
      Which result or results do these actions produce? (Choose all that apply) 
      A. Tables in the data model hierarchy can be joined to any other table 
      in the hierarchy 
      B. Tables are joined on a single column 
      C. The index length of all primary keys is 10 bytes or less 
      D. Key columns are always compared on a binary basis regardless of which 
      options were selected during the installation of SQL Server 



      Answer: 
      Abdul,Florian: A, B, D 
      Gabio_final: A, B
      Gabio_final:
      D: does not work, because you can install SQL Server with the 
      case-sensitive option and nothing in the model forces a binary 
      comparison. 
      Frodo:
      The word 'binary' in option D is missing in the dumps of Abdul and 
      Gabio_final. It is present in Florian's.


      53.You are designing a data model that will record standardized student 
      assessments for a school district. The school district wants assessments 
      to be completed online. The school district also wants each student's 
      responses and scores to be stored immediately in the database. Every year, 

      each student will complete a behavior assessment and an academic 
      assessment. The school district needs to prevent changes to assessment 
      responses after the assessment is complete, but students should be allowed 

      to change their responses during the course of the assessment. The school 
      district wants to require each student to answer all items on each 
      assessment. When a student indicates completion, the score for the entire 
      assessment must be computed and recorded. You design a student table and 
      an assessment table. 
      You want to accomplish the following goals: 
      - Ensure that there is no redundant or derived data. 
      - Ensure that an assessment response cannot be changed after the 
      assessment is complete and the score is entered. 
      - Ensure that all assessment items have responses when the assessment is 
      completed and the score is entered. 
      - Ensure that an assessment score is computed and stored when the 
      assessment is completed and the score is entered. 
      You take the following steps: 
      - Create a table name BehaviorAssessment and a table named 
      AcademicAssessment. Each with a primary key consisting of AssessmentID. 
      Include a column in each table for the text of each assessment item. 
      - Create a StudentBehavior table with a foreign key referencing the 
      Student table, a foreign key referencing the BehaviorAssessment table and 
      a primary key consisting a StudentID and AssessmentID. Include a 
      nonnullable column in the StudentBehavior table for each assessment 
      response, as well as an AssessmentScore column. 
      - Create a StudentAcademic table with a foreign key referencing the 
      Student table, a foreign key referencing the AcademicAssessment table, 
      and a primary key consisting of StudentID and AssessmentID. Include a 
      primary key consisting of StudentID and AssessmentID. Include a 
      nonnullable column in the StudentAcademic table for each assessment 
      response, as well as an AssessmentScore column. 
      - Add an INSERT trigger on the StudentBehavior table computing a value for 

      the AssessmentScore column when a row is inserted. 
      - Add an INSERT trigger on the StudentAcademic table computing value for 
      the AssessmentScore column when a row is inserted. 
      - Add an INSERT trigger on the StudentBehavior table preventing changes in 

      the assessment response if the AssessmentScore column is not null. 
      - Add an INSERT trigger on the StudentAcademic table preventing changes in 

      the assessment response if the AssessmentScore column is not null. 
      Which result or results do these actions produce? (Choose all that apply) 
      A. There is no redundant or derived data 
      B. An assessment response cannot be changed after the assessment is 
      complete and the score is entered 
      C. All assessment items have a response when the assessment is complete 
      and the score is entered 
      D. An assessment score is computed and stored when the assessment is 
      complete and the score is entered 

      Answer: C, D 
      Florian: (remember, all four triggers are INSERT triggers) 
      A: The score is derived data. 
      B: The triggers are on INSERT, not UPDATE; you would need two UPDATE 
      triggers to prevent changes in the two tables. 
      C: Correct, as the response columns are nonnullable, so every item must 
      have a response. 
      D: Correct, as the first two triggers do this. 


      54. You are designing a data model that will record standardized student 
      assessments for a school district. The school district wants the 
      assessments to be completed online. The school district also wants each 
      student's responses and scores to be stored immediately in the database. 
      Every year, each student will complete a behavior assessment and an 
      academic assessment. The school district needs to prevent changes to 
      assessment responses after the assessment is completed, but students 
      should be allowed to change their responses during the course of the 
      assessment. The school district wants to require each student to answer 
      all items on each assessment. When a student indicates completion, the 
      score for the entire assessment must be computed and recorded. 
      You design a Student table and an Assessment table. 
      You want to accomplish the following goals: 
      - Ensure that there is no redundant or derived data. 
      - Ensure that an assessment response cannot be changed after the 
      assessment is complete and the score is entered. 
      - Ensure that all assessment items have responses when the assessment is 
      completed and the score is entered. 
      - Ensure that an assessment score is computed and stored when the 
      assessment is completed and the score is entered. 
      You take the following steps: 
      - Create a subtype table name BehaviorAssessment and a subtype table named 

      AcademicAssessment. Each with a primary key consisting of AssessmentID. 
      Include a column in each table for the text of each assessment item. 
      - Create a StudentBehavior table with a foreign key referencing the 
      Student table, a foreign key referencing the BehaviorAssessment table and 
      a primary key consisting a StudentID and AssessmentID. Include a 
      nonnullable column in the StudentBehavior table for each assessment 
      response, as well as a AssessmentScore column. 
      - Create a StudentAcademic table with a foreign key referencing the 
      Student table, a foreign key referencing the AcademicAssessment table, 
      and a primary key consisting of StudentID and AssessmentID. Include a 
      primary key consisting of StudentID and AssessmentID. Include a 
      nonnullable column in the StudentAcademic table for each assessment 
      response, as well as an AssessmentScore column. 
      - Add an INSERT trigger on the StudentBehavior table computing a value for 

      the AssessmentScore column when a row is inserted. 
      - Add an INSERT trigger on the StudentAcademic table computing value for 
      the AssessmentScore column when a row is inserted. 
      Which result or results do these actions produce? (Choose all that apply) 
      A. There is no redundant or derived data. 
      B. An assessment response cannot be changed after the assessment is 
      completed and the score is entered. 
      C. All assessment items have responses when the assessment is completed 
      and the score is entered. 
      D. An assessment score is computed and stored when the assessment is 
      completed and the score is entered. 



      Answer: D 

      Gabio_final:
      A: the score is derived data, since it is computed
      B: there are no triggers or other objects that prevent such changes
      C: the computed score alone does not guarantee that every item has a 
      response
      D: correct


      54.You are designing a data model that will record standardized student 
      assessments for a school district. The school district wants assessments 
      to be completed online. The school district also wants each student's 
      responses and scores to be stored immediately in the database. Every year, 

      each student will complete a behavior assessment and an academic 
      assessment. The school district needs to prevent changes to assessment 
      responses after the assessment is complete, but students should be allowed 

      to change their responses during the course of the assessment. The school 
      district wants to require each student to answer all items on each 
      assessment. When a student indicates completion, the score for the entire 
      assessment must be computed and recorded. You design a student table and 
      an assessment table. 
      You want to accomplish the following goals: 
      - Ensure that there is no redundant or derived data. 
      - Ensure that an assessment response cannot be changed after the 
      assessment is complete and the score is entered. 
      - Ensure that all assessment items have responses when the assessment is 
      completed and the score is entered. 
      - Ensure that an assessment score is computed and stored when the 
      assessment is completed and the score is entered. 
      You take the following steps: 
      - Create a subtype table named BehaviorAssessment and a subtype table 
      named AcademicAssessment, each with a primary key consisting of 
      AssessmentID. Include a column in each table for the text of each 
      assessment item. 
      - Create a StudentBehavior table with a foreign key referencing the 
      Student table, a foreign key referencing the BehaviorAssessment table, and 
      a primary key consisting of StudentID and AssessmentID. Include a 
      nonnullable column in the StudentBehavior table for each assessment 
      response, as well as an AssessmentScore column. 
      - Create a StudentAcademic table with a foreign key referencing the 
      Student table, a foreign key referencing the AcademicAssessment table, and 
      a primary key consisting of StudentID and AssessmentID. Include a 
      nonnullable column in the StudentAcademic table for each assessment 
      response, as well as an AssessmentScore column. 
      - Add an INSERT trigger on the StudentBehavior table computing a value 
      for the AssessmentScore column when a row is inserted. 
      - Add an INSERT trigger on the StudentAcademic table computing a value 
      for the AssessmentScore column when a row is inserted. 
      - Add an UPDATE trigger on the StudentBehavior table preventing changes 
      to the assessment response if the AssessmentScore column is not null. 
      - Add an UPDATE trigger on the StudentAcademic table preventing changes 
      to the assessment response if the AssessmentScore column is not null. 
      Which result or results do these actions produce? (Choose all that apply) 
      A. There is no redundant or derived data 
      B. An assessment response cannot be changed after the assessment is 
      complete and the score is entered 
      C. All assessment items have a response when the assessment is complete 
      and the score is entered 
      D. An assessment score is computed and stored when the assessment is 
      complete and the score is entered 



      Answer: B, C, D 
      Gabio_final:
      A: the score is derived data, since it is a computed value
      B: the UPDATE triggers enforce this
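      Frodo: for illustration, the UPDATE trigger described in the steps might 
      be sketched like this (a single Response column is an assumption; the 
      real table has one column per assessment item):

```sql
-- Sketch: reject changes to responses once a score has been recorded.
-- 'deleted' holds the rows as they were before the UPDATE.
CREATE TRIGGER trg_StudentBehavior_Lock
ON StudentBehavior
FOR UPDATE
AS
IF EXISTS (SELECT * FROM deleted WHERE AssessmentScore IS NOT NULL)
BEGIN
    RAISERROR ('Responses cannot be changed after scoring.', 16, 1)
    ROLLBACK TRANSACTION
END
```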


      55.A table has three columns, and you want a composite index. Column A 
      has about 10% unique values, Column B about 50%, and Column C about 90%. 
      Which index enables the fastest query? 
      A. CLUSTERED(A, B, C) 
      B. NON-CLUSTERED(A, B, C) 
      C. CLUSTERED(C, B, A) 
      D. NON-CLUSTERED(C, B, A) 
      E. CLUSTERED on C, NONCLUSTERED on A, B 



      Answer: C 

      Gabio_final:
      If the question asks for the fastest query, I guess C is the answer. 
      The most unique column should be first in a composite index, and a 
      clustered index can be a composite index. 
      Clustered indexes are not a good choice for: 
      - Columns that undergo frequent changes, because this results in the 
      entire row moving (SQL Server must keep the row's data values in 
      physical order). This is an important consideration in high-volume 
      transaction processing systems where data tends to be volatile. 
      - Covered queries. The more columns within the search key, the greater 
      the chance for the data in the indexed column to change, resulting in 
      additional I/O. 
      Consider using nonclustered indexes for: 
      - Columns that contain a high number of distinct values, such as a 
      combination of last name and first name (if a clustered index is used for 
      other columns). If there are very few distinct values, such as only 1 and 
      0, no index should be created. 
      - Queries that do not return large result sets. 
      - Columns frequently involved in search conditions of a query (WHERE 
      clause) that return exact matches. 
      - Decision Support System applications for which joins and grouping are 
      frequently required. Create multiple nonclustered indexes on columns 
      involved in join and grouping operations, and a clustered index on any 
      foreign key columns. 
      - Covered queries. 


      56.The WHERE clause of a query includes searches on Column A, Column B, 
      and Column C. Column A is nearly identical in all rows. Column B is 
      identical in about 50% of the rows, and Column C is identical in about 
      10% of the rows. How should you create indexes? 
      A. Create composite clustered index on Column A, Column B, Column C. 
      B. Create composite clustered index on Column C, Column B, Column A. 
      C. Create composite nonclustered index on Column A, Column B, Column C. 
      D. Create composite nonclustered index on Column C, Column B, Column A. 
      E. Create composite clustered index on Column A, separated nonclustered 
      index on Column B & Column C. 
      F. Create separated nonclustered index on each column. 



      Answer: B or D
      X: In my view, if "fastest query" is asked, choose clustered; if "high 
      performance" is asked, choose nonclustered (applicable to this 
      identical question only).
      Abdul: If you need performance, use B. Otherwise, use D.
      Frodo: Gabio_final accidentally makes a mistake in this question.


      57.You are investigating reported problems regarding the performance of a 
      query in your database. The WHERE clause of the query includes search 
      arguments on Column A, Column B, and Column C. You analyse the data and 
      discover that the content of Column A is nearly identical in all rows. 
      The content of Column B is the same in about 50 percent of the rows. The 
      content of Column C is the same in about 10 percent of the rows. 
      How should you index the table to improve query performance? 
      A. Create composite clustered index on Column A, Column B, Column C 
      B. Create composite clustered index on Column C, Column B, Column A 
      C. Create composite nonclustered index on Column A, Column B, Column C 
      D. Create composite nonclustered index on Column C, Column B, Column A 
      E. Create composite clustered index on Column A,separate nonclustered 
      index on Column B & C 
      F. Create separate nonclustered index on each column 



      Answer: B or D
      Frodo: Gabio_final accidentally makes a mistake in this question.


      58.You have columns A, B, and C with 100%, 50%, and 10% cardinality. 
      A. Create Clustered Index A, B, C 
      B. Create nonclustered index A, B, C 
      C. Create A clustered B & C nonclustered 
      D. other choices which I don't remember 



      Answer: A 


      59.Your company has a HQ office and 3 regional offices (RG1, RG2, RG3). 
      Each office has an employee table with the following columns: emp_id, 
      salary, hire_date, department, s_number, job_id 
      You want to write a view that gives the following form of results: 
      Office emp_id department hire_date 
      ------------- --------------- ----------------- ----------------- 
      Which of the following do you use with the CREATE VIEW statement? 
      A. SELECT 'HQ' Office, emp_id, department, hire_date FROM HQ SELECT 'RG1' 
      Office, emp_id, department, hire_date FROM RG1 SELECT 'RG2' Office, 
      emp_id, department, hire_date FROM RG2 SELECT 'RG3' Office, emp_id, 
      department, hire_date FROM RG3 
      B. SELECT FROM HQ SELECT FROM RG1 SELECT FROM RG2 SELECT FROM RG3 
      C. SELECT 'HQ', emp_id, department, hire_date FROM HQ Union ALL SELECT 
      'RG1', emp_id, department, hire_date FROM RG1 Union ALL SELECT 'RG2', 
      emp_id, department, hire_date FROM RG2 Union ALL SELECT 'RG3', emp_id, 
      department, hire_date FROM RG3 (in real exam it is like this) 
      D. SELECT Office='HQ', emp_id, department, hire_date FROM HQ Union ALL 
      SELECT Office='RG1', emp_id, department, hire_date FROM RG1 UNION ALL 
      SELECT Office='RG2', emp_id, department, hire_date FROM RG2 Union ALL 
      SELECT Office='RG3', emp_id, department, hire_date FROM RG3 



      Answer: D 
      Frodo: Gabio_final wrongly claims that C (or B  he seems confused) is 
      equal to D. It is not.


      60.You have a database to keep track of sales information. Many of your 
      queries calculate the total sales amount for a particular salesperson. 
      You are working with a nested procedure that will pass a parameter back 
      to the calling procedure containing the total sales, as follows: 
      CREATE PROCEDURE GetSalesPersonData @SalesPersonID Int, @RegionID Int, 
      @SalesAmount Money OUTPUT AS SELECT @SalesAmount = SUM(SalesAmount) FROM 
      SalesInformation WHERE @SalesPersonID = SalesPersonID 
      Which statement will accurately execute the procedure and receive the 
      required result? 


      Answer: EXECUTE GetSalesPersonData 1, 1, @SalesAmount OUTPUT 
      Abdul: OUTPUT is needed in both the parameter declaration and the EXEC. 
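      Frodo: a complete calling batch would look like this (the procedure is 
      the one from the question; the final SELECT is just to show the result):

```sql
-- The caller declares a variable and marks it OUTPUT in the EXECUTE,
-- matching the OUTPUT keyword in the procedure's parameter list.
DECLARE @SalesAmount Money
EXECUTE GetSalesPersonData 1, 1, @SalesAmount OUTPUT
SELECT @SalesAmount AS TotalSales
```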

      Disagreement in the following question!!! 

      61.A question asking how to implement cascade deleting? 
      A. Create a trigger and do not use referential integrity 
      B. Create a trigger and use referential integrity

      Answer:
      Florian, Abdul: A
      Gabio_final: B
      Gabio_final:
      BOL : Maintaining demoralized (sic! Frodo) data is different from 
      cascading because cascading typically refers to maintaining relationships 
      between primary and foreign keys
      Triggers are often used for enforcing business rules and data integrity. 
      SQL Server provides declarative referential integrity (DRI) through the 
      table creation statements (ALTER TABLE and CREATE TABLE); however, DRI 
      does not provide cross-database referential integrity. To enforce 
      referential integrity (rules about the relationships between the primary 
      and foreign keys of tables), use primary and foreign key constraints (the 
      PRIMARY KEY and FOREIGN KEY keywords of ALTER TABLE and CREATE TABLE). If 
      constraints exist on the trigger table, they are checked prior to trigger 
      execution. If either PRIMARY KEY or FOREIGN KEY constraints are violated, 
      the trigger is not executed (fired).
      Frodo: Question is incomplete. Need more information. I think A is most 
      plausible. 
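      Frodo: for illustration, a cascading delete done with a trigger and no 
      DRI (answer A) might look like this; table and column names are 
      hypothetical:

```sql
-- Cascading delete via trigger: removing a customer removes its orders.
-- Note: Orders.customer_id must NOT have a FOREIGN KEY constraint here,
-- because the constraint would block the delete before the trigger
-- fires (there is no ON DELETE CASCADE in SQL Server 7.0).
CREATE TRIGGER trg_Customer_CascadeDelete
ON Customer
FOR DELETE
AS
DELETE Orders
FROM Orders JOIN deleted
ON Orders.customer_id = deleted.customer_id
```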


      62.Your company's customers table has 1,000,000 records, and will be 
      increased by 20% EACH YEAR. The table is updated every night, and during 
      daytime some queries are performed on it. You want to increase the 
      performance of the queries. What fill factor will you set? 
      A. Use the default fill factor 
      B. 25% 
      C. 75% 
      D. 100% 



      Answer: C 
      Gabio_final: Since there is a balance of UPDATE and SELECT activity, 75% 
      is best. 
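      Frodo: the fill factor is specified when the index is built, e.g. (index 
      and column names are hypothetical):

```sql
-- Leave 25% free space on each leaf page to absorb the nightly updates
-- while keeping pages reasonably full for the daytime queries.
CREATE INDEX ix_customers_name
ON customers (name)
WITH FILLFACTOR = 75
```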

      Disagreement in the following question!!! 

      63.You are recreating the customers database and you want to create the 
      index. The table has 1,000,000 records and is heavily updated. The table 
      is expected to increase by 20% over two years. Choices given: 
      A. Use the default fillfactor 
      B. 100% fillfactor 
      C. 75% fillfactor 
      D. Use a fillfactor 25% 
      E. Do nothing 



      Answer: D

      (Abdul, Gabio_final (and Frodo!): D
      Florian: C) 
      Gabio_final:
      A lower fillfactor option increases UPDATE and INSERT performance because 
      of reduced page splitting. A lower fillfactor value is suited to online 
      transaction processing (OLTP) environments.
      A higher fillfactor value increases query or read performance because the 
      rows can be read from fewer pages. A higher fillfactor is suited to 
      decision support services (DSS) environments.
      It is useful to set the fill factor option to another value only when a 
      new index is created on a table with existing data, and then only when 
      future changes in that data can be accurately predicted.
      Abdul:
      The acceptable range for FILLFACTOR is 0-100%.
      A lower fillfactor value increases UPDATE and INSERT performance... and 
      is suited to OLTP environments. 
      A higher fillfactor value increases query or read performance... and is 
      suited to DSS environments. 
      Therefore it is very important that you read the question on the exam, 
      not just the one from the dumps! The second dumped question does not have 
      the key phrase, which is in the first dump: 'heavily updated'. This is 
      it! The answer is therefore that this is an OLTP database, a lower value 
      is needed, and the answer is 25%. 


      64.Your server named Corporate has a Sales database that stores sales 
      data for a software distribution company. Two remote servers named New 
      York and Chicago each store sales data in a SalesOrder table relative 
      only to their respective sales territories. The SalesOrder table on the 
      Corporate server is updated once a week with data from the remote 
      servers. The SalesOrder table on each server, including Corporate, is 
      defined as follows: 
      CREATE TABLE SalesOrders ( Number Char(10) NOT NULL, CustomerName 
      VarChar(100) NOT NULL, TerritoryName VarChar(50) NOT NULL, EntryDate 
      Datetime NOT NULL, Amount Money NOT NULL) 
      You need to create a view that shows a current list of all sales from the 
      New York and Chicago sales territories, and the list should have the 
      following format: 
      Territory Customer Date Amount 
      ----------------------------------------- 
      Which view can you create to show all sales from the New York and Chicago 
      sales territories in the required format? 



      Answer: 
      CREATE VIEW SalesSummaryView (Territory, Customer, Date, Amount) AS 
      SELECT TerritoryName, CustomerName, EntryDate, Amount 
      FROM NewYork.Sales.dbo.SalesOrder 
      UNION ALL 
      SELECT TerritoryName, CustomerName, EntryDate, Amount 
      FROM Chicago.Sales.dbo.SalesOrder 


      66.Department managers in your company want to use Microsoft Excel pivot 
      tables to analyse data from your SQL Server database. You need to extract 
      data from tables in the database to an Excel spreadsheet so that managers 
      can copy the spreadsheet and build pivot tables. The data in the database 
      change frequently, and you want to automate the process of updating the 
      Excel spreadsheet. You plan to use a SQL Server Agent scheduled job to 
      automate the extraction of the data to the spreadsheet. What should the 
      scheduled job execute? 
      A. A bulk copy script to a tab-delimited text file 
      B. A bulk copy script to populate the spreadsheet 
      C. A Data Transformation Services (DTS) export package to a tab-delimited 
      text file 
      D. A Data Transformation Services (DTS) package to populate the 
      spreadsheet 



      Answer: D 


      67.You have a database that contains information about publications for 
      sale. 
      You want to write a full-text search query that will search through all 
      columns in one table enabled for full-text querying. The table includes a 
      column named Titles, a column named Price, and a column named Notes. The 
      Titles and Notes columns are full-text enabled. You want to find all 
      publications that deal with French gourmet cooking. Which CONTAINS 
      statement should you use? 
      A. WHERE CONTAINS (*, '"French gourmet"') 
      B. WHERE CONTAINS (notes, '"French gourmet"') 
      C. WHERE CONTAINS (titles, '"French gourmet"') 
      D. WHERE CONTAINS (price, '"French gourmet"') 



      Answer: A 
      Gabio_final: the trick is that you must know CONTAINS (*, ...) applies 
      to all columns enabled for full-text querying.



      69.You have a query that runs frequently and has a lot of overhead. What 
      could you do to increase performance? 

      Answer: Make it a stored procedure. 


      70.A tricky question about an aggregate function with SUM in the SELECT 
      statement. All choices had a WHERE clause, but only one used a subquery 
      (the correct answer). You can only use an aggregate function in a WHERE 
      clause if it is in a subquery; if it is not in a subquery, you need to 
      use the HAVING clause. 
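      Frodo: a sketch of the rule (table and column names are hypothetical):

```sql
-- Wrong: an aggregate cannot appear directly in a WHERE clause.
-- SELECT regionID FROM orders WHERE SUM(orderamount) > 1000

-- Allowed: aggregate inside a subquery in the WHERE clause.
SELECT regionID, orderamount FROM orders
WHERE orderamount > (SELECT AVG(orderamount) FROM orders)

-- Allowed: aggregate filtered with HAVING after GROUP BY.
SELECT regionID, SUM(orderamount) FROM orders
GROUP BY regionID
HAVING SUM(orderamount) > 1000
```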

      71. (Frodo: looks like a less detailed version of 90.)
      Two tables: Processing and Accounting. Users enter info in Processing, 
      and the info is batch-transferred to Accounting once a day using 
      (something like): 
      Begin Tran 
      Check Batch files for errors 
      Update Accounting Table 
      Delete info from Processing 
      Commit Tran 
      Rollback Tran (if info contains error(s)) 
      Select all that apply:
      A. tran minimizes locking
      B. tran prevents deadlocks
      C. tran prevents access to Processing table


      Answer: C
      Abdul:
      BOL: To help minimize deadlocks: 
      - Access objects in the same order. 
      - Avoid user interaction in transactions. 
      - Keep transactions short and in one batch. 
      - Use as low an isolation level as possible. 
      - Use bound connections. 

      Disagreement in the following question!!! 

      72.You have an accounting application that allows users to enter 
      information 
      into a table named Staging. When data entry is complete, a batch job uses 
      the rows in the Staging table to update a table named Production. Each 
      user's rows in the Staging table are identified by the user's SQL Server 
      process ID number in a column named SpID. The code for the batch job that 
      updates the Production table is: 
      DECLARE @Count Int 
      BEGIN TRAN 
      SELECT @Count = COUNT(*) FROM Production p JOIN Staging s 
      ON p.account = s.account WHERE s.SpID = @@SpID 
      UPDATE p SET Amount = s.Amount FROM Production p JOIN Staging s 
      ON p.account = s.account WHERE s.SpID = @@SpID 
      IF @@ROWCOUNT <> @Count ROLLBACK TRAN 
      ELSE COMMIT TRAN 
      You find out that there have been locking problems when two users run the 
      batch job at the same time. What should you do to solve the problem? 
      A. Set transaction isolation level to SERIALIZABLE before running batch 
      job. 
      B. Set transaction isolation level to READ UNCOMMITTED before running 
      batch job. 
      C. Set deadlock priority to normal before run. 
      D. Set deadlock priority to low before run. 
      E. Include the table hint with rowlock, UPDLOCK when counting the rows in 
      the production table. 
      F. Include the table hint with TABLOCKX when counting the rows in the 
      production table. 



      Answer:
      Florian, Abdul: A
      Gabio_final: E (Frodo: this sounds reasonable)
      Gabio_final:
      A: Places a range lock on the data set, preventing other users from 
      updating or inserting rows into the data set until the transaction is 
      complete. This is the most restrictive of the four isolation levels. 
      Because concurrency is lower, use this option only when necessary. This 
      option has the same effect as setting HOLDLOCK on all tables in all 
      SELECT statements in a transaction. 
      B: Implements dirty read, or isolation level 0 locking, which means that 
      no shared locks are issued and no exclusive locks are honored. When this 
      option is set, it is possible to read uncommitted or dirty data; values 
      in the data can be changed and rows can appear or disappear in the data set 
      before the end of the transaction. This option has the same effect as 
      setting NOLOCK on all tables in all SELECT statements in a transaction. 
      This is the least restrictive of the four isolation levels. 
      NOLOCK : Do not issue shared locks and do not honor exclusive locks. When 
      this option is in effect, it is possible to read an uncommitted 
      transaction or a set of pages that are rolled back in the middle of a 
      read. Dirty reads are possible. Only applies to the SELECT statement.
      C,D: do not resolve the problem
      E: ROWLOCK: Use row-level locks rather than the coarser-grained page- 
      and table-level locks.
      UPDLOCK:
      Use update locks instead of shared locks while reading a table, and hold 
      locks until the end of the statement or transaction. UPDLOCK has the 
      advantage of allowing you to read data (without blocking other readers) 
      and update it later with the assurance that the data has not changed 
      since you last read it.
      F: TABLOCKX Use an exclusive lock on a table. This lock prevents others 
      from reading or updating the table and is held until the end of the 
      statement or transaction.
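      Frodo: applying answer E, the batch from the question would become 
      something like this (only the table hint is added; everything else is 
      the question's own code):

```sql
-- Take update locks while counting, so a second concurrent run of the
-- batch waits at the SELECT instead of conflicting at the UPDATE.
DECLARE @Count Int
BEGIN TRAN
SELECT @Count = COUNT(*)
FROM Production p WITH (ROWLOCK, UPDLOCK)
JOIN Staging s ON p.account = s.account
WHERE s.SpID = @@SPID
UPDATE p SET Amount = s.Amount
FROM Production p JOIN Staging s ON p.account = s.account
WHERE s.SpID = @@SPID
IF @@ROWCOUNT <> @Count ROLLBACK TRAN
ELSE COMMIT TRAN
```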


      73.Evaluate this trigger: 
      USE sales 
      GO 
      CREATE TRIGGER inventory_update ON inventory FOR UPDATE AS 
      IF UPDATE(product_id) BEGIN ROLLBACK TRANSACTION END 
      Which result will this trigger provide? 
      A. Allows a user to update the inventory table 
      B. Prevents a user from modifying product_id values 
      C. Allows a user to update the product_id column 
      D. Prevents a user from updating the inventory table 
      E. Prevents a user from accessing the product_id column 



      Answer: B 


      74.A table design suggests a parent/child relation. The parent table will 
      have 1 million records, and the child will have 100 million. Aggregate 
      data is contained in the child table, and detailed child reporting is 
      occasionally used. What should you do? 
      A. Implement the design as is 
      B. Create additional columns in parent 
      C. Create aggregate data columns in parent 
      D. Create constraints to foreign keys in child 
      E. Create indexes to foreign keys in child. 



      Answer: C 
      Gabio_final (on option E): If you create an index, inserts can be very 
      slow on large tables.

      Disagreement in the following question!!! 

      75.You are implementing a logical data model for a decision support 
      system (DSS) database. Two of the tables in the model have a parent/child 
      relationship. The parent table is expected to have more than 1 million 
      rows. The child table is expected to have more than 100 million rows. 
      Most reports present aggregate child information grouped according to 
      each parent row. Reports containing detailed child table information are 
      occasionally needed. How should the tables be implemented? 
      A. Create the parent table and child table as described in the logical 
      data model 
      B. Create the parent table that includes aggregate child information. Do 
      not create child table 
      C. Create the parent table that includes aggregate child information. 
      Create the child table as it exists on the logical data model 
      D. On the child table, create a non-clustered index that includes any 
      columns that are aggregate and the foreign key referenced to the parent 



      Answer: C

      (Gabio_final, Florian (and Frodo!): C
      Abdul: D) 
      Gabio_final:
      DSS systems have low update requirements but large volumes of data. Use 
      many indexes to improve query performance. 
      The point of this question is that you are going to run a report on the 
      child table. The reports are aggregated (sorted) based on the child table. 

      Therefore you want an index on the child table that includes all of the 
      columns of the SELECT statement, known as 'covering the query'. BUT also 
      the report is then sorted by the column(s) of the parent table. You 
      generally want an index on a column or group of columns on which you do a 
      sort. 

      The ideal answer is probably the following: put a clustered index on the 
      parent, put a composite non-clustered index on the child, and make sure 
      there is a foreign key from the parent to the child. Choose the best 
      closest answer on the exam.

      Abdul:
      Imagine that you have a "regular" database like one that takes orders. You 

      will have lots of little transactions throughout the day, and you will do 
      stuff like run a report on all of today's transactions at the end of the 
      day. This means a few records are affected, usually spread out over time. 


      If instead you have a DSS database (or a data warehouse), you will just 
      run a couple of whopper reports on all of the data, but only usually once 
      a day. This is also known as a reporting database. This is the type of 
      database in this question. I actually think that the type of database 
      really isn't all that important in this question. 


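      Frodo: the 'ideal answer' described above, sketched in DDL (all names 
      and types are hypothetical):

```sql
-- Clustered index on the parent's key; composite nonclustered index on
-- the child covering the foreign key plus the aggregated column; and a
-- FOREIGN KEY constraint tying the child to the parent.
CREATE TABLE Parent (ParentID Int NOT NULL PRIMARY KEY CLUSTERED)
CREATE TABLE Child (
    ChildID Int NOT NULL PRIMARY KEY NONCLUSTERED,
    ParentID Int NOT NULL REFERENCES Parent (ParentID),
    Amount Money NOT NULL)
CREATE NONCLUSTERED INDEX ix_child_parent_amount
ON Child (ParentID, Amount)
```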


      76.The sales database contains a customer table and an order table. For 
      each order there is one and only one customer, and for each customer 
      there can be zero or many orders. How should primary and foreign key 
      fields be placed into the design of this database? 
      A. A primary key should be created for the customer_id field in the 
      customer table and also for the customer_id field in the order table. 
      B. A primary key should be created for the order_id field in the customer 
      table and also for the customer_id field in the order table. 
      C. A primary key should be created for the customer_id field in the 
      customer table and a foreign key should be created for the customer_id 
      field in the order table. 
      D. A primary key should be created for the customer_id field in the 
      customer table and a foreign key should be created for the order_id field 
      in the order table. 



      Answer: C 
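      Frodo: answer C in DDL (column types and the name column are 
      assumptions):

```sql
-- customer_id is the primary key of customer and a foreign key in orders.
CREATE TABLE customer (
    customer_id Int NOT NULL PRIMARY KEY,
    name VarChar(50) NOT NULL)
CREATE TABLE orders (
    order_id Int NOT NULL PRIMARY KEY,
    customer_id Int NOT NULL FOREIGN KEY REFERENCES customer (customer_id))
```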


      77.If you attempt to create a stored procedure using the name of an 
      existing stored procedure in the database, you will get an error. 
      Therefore, when writing a script to create a stored procedure it is 
      important to check for an existing stored procedure with the same name 
      and drop it if it exists. Assuming that you are the database owner, 
      which of the following SQL batches will drop a stored procedure named 
      sp_myprocedure from the current database? 
      A. SELECT * FROM sysprocedures WHERE name = 'dbo.sp_myprocedure' IF 
      @@ROWCOUNT >= 1 THEN DROP PROCEDURE dbo.sp_myprocedure 
      B. SELECT * FROM sysobjects WHERE name = object_name() IF @@ROWCOUNT = 1 
      THEN DROP PROCEDURE dbo.sp_myprocedure 
      C. IF EXISTS (SELECT * FROM sysprocedures WHERE id = 
      object_id('dbo.sp_myprocedure')) DELETE PROCEDURE dbo.sp_myprocedure FROM 
      sysobjects 
      D. IF EXISTS (SELECT * FROM sysobjects WHERE id = 
      object_id('dbo.sp_myprocedure')) DROP PROCEDURE dbo.sp_myprocedure 



      Answer: D 
      Gabio_final:
      A: invalid object
      B: the object_name function requires 1 argument
      C: DELETE PROCEDURE is incorrect syntax


      78.In your results you want to display the character string 'The name of 
      this product is' immediately before the product name. Which of the 
      following SQL SELECT statements could you use? 
      A. SELECT 'The name of this product is', prodname FROM products 
      B. SELECT 'The name of this product is ' & prodname FROM products 
      C. SELECT 'The name of this product is ' + prodname FROM products 
      D. SELECT (The name of this product is), prodname FROM products 



      Answer: C 
      Gabio_final: 
      A: result set will contain 2 columns
      B: Invalid operator for data type
      D: wrong syntax


      79.In the Pubs database the TitleAuthor table is used to define the 
      many-to-many relationships between authors and books. Which of the 
      following SQL SELECT statements will show which books (titles) have more 
      than one author? 
      A. SELECT DISTINCT au_id, title_id FROM titleauthor WHERE title_id = 
      title_id AND au_id <> au_id 
      B. SELECT DISTINCT title_id FROM titleauthor WHERE title_id(1) = 
      title_id(2) AND au_id(1) <> au_id(2) 
      C. SELECT DISTINCT au_id, title_id FROM titleauthor t1, titleauthor t2 
      WHERE t1.title_id = t2.title_id AND t1.au_id <>t2.au_id 
      D. SELECT DISTINCT t1.title_id, t2.title_id FROM titleauthor t1, 
      titleauthor t2 WHERE t1.title_id = t2.title_id AND t1.au_id <> t2.au_id 



      Answer: D
      Abdul:
      Same book but the authors are different. 
      You want to join a table to itself and look for titles that are the same, 
      but the authors are different -- another way of saying more than one 
      author on a book. So the WHERE clause in C and D contains exactly that. 
      The difference is that in C for each book, you SELECT all of the authors 
      as well. But in D you just SELECT the books, which is what the question 
      asked.


      80.You are DBA at photo-reseller shop. Your shop buys photographs from 
      organizations, individual photographers and occasionally from your 
      customers. You also have photographers on staff. From time to time 
      individual independent photographers become your employees and sometimes 
      leave for independent careers. You create the following tables: 
      INDIVIDUAL with IndividualID as Identity - Primary Key, and with Employee 
      as boolean 
      ORGANIZATION with OrganizationID as Identity - Primary Key 
      INDIVIDUALORGANIZATION with IndividualID - foreign key references 
      INDIVIDUALS.IndividualID, and OrganizationID - foreign key references 
      ORGANIZATIONS.OrganizationID 
      (Select all that apply): 
      A. You can track Individual as Employee, Customer or Independent 
      Photographer 
      B. IndividualID can never be equal to OrganizationID 
      C. No data redundancy 
      D. Individual may be related to one or more organizations 
      E. Individual can be related to another individual 


      Answer: C, D 
      Gabio_final:
      A: a Boolean cannot tell the type of individual (employee, customer or 
      independent photographer).
      B: there are no constraints or triggers preventing equal values.
      C: correct.
      D: many-to-many relationship through the link table.
      E: no relationship between the Individual table and itself. 


      82.You are designing a data model to track research projects. A project 
      might be undertaken in one research institution or in multiple 
      institutions, and some research institutes are part of a university or 
      of a larger research institute. Each project is assigned a group of 
      scientists who might perform different jobs in each project. A scientist 
      who is a member of the staff at one institute might also be assigned to 
      a project that is being undertaken at another institute. 
      Exhibit: 
      Project: (PK) projectID 
      Institutes: instituteID 
      Projectinstitution: (PK) instituteID, projectID 
      ScientistProject: (PK) scientistID, projectID, jobname 
      Scientist: (PK) scientistID 
      Job: (PK) jobID 
      - No self-join on the Institution table 
      - No direct relation between Scientist and Institution 
      Projectinstitution TB: projectID, institutionID ----> Institution TB: 
      instituteID 
      Projectinstitution TB: projectID, institutionID ----> Project TB: 
      projectID 
      Projectscientist TB: projectID, scientistID, jobname ----> Project TB: 
      projectID 
      Projectscientist TB: projectID, scientistID, jobname ----> Job TB: jobID 
      Projectscientist TB: projectID, scientistID, jobname ----> Scientist TB: 
      scientistID 
      You want to accomplish the following goals: 
      - Ensure that all the scientists conducting research for any specific 
      project can be reported. 
      - Ensure that a scientist's job for a specific project in a specific 
      institute can be reported. 
      - Ensure that all of the institutes participating in any specific 
      project can be reported. 
      - Ensure that the institute at which a scientist is a staff member can 
      be tracked. 
      - Ensure that an institute can be identified as part of another 
      institute. 

      What task will be achieved (choose all that apply): 
      A. All of the scientists conducting research for any specific project can 
      be reported 
      B. A scientist job for a specific project in a specific institution can be 

      reported 
      C. All of the institutes participating in any specific project can be 
      reported 
      D. The institute at which a scientist is a staff member can be tracked 
      E. An institute can be identified as part of another institute 



      Answer: A, B, C 
      Gabio_final:
      A: ScientistProject is a link table from Project and Scientist
      B: job name in ScientistProject table.
      C: Projectinstitution is a link table from Project and Institution
      D: no such information is available
      E: No self-join on Institution table 

      Frodo: I might be stupid, but it took a while before I realized TB 
      stands for Table (duh).


      83.Scientists in a research institute: 
      Exhibit: 
      Project: (PK) projectID, instituteID 
      Institute: (PK) instituteID, scientistID, InstituteName, etc. 
      Scientist: (PK) scientistID, jobID 
      Job: (PK) jobID 
      (Choose all that apply): 
      A. Can you trace scientists who work for a particular project? 
      B. Can you tell which scientist belongs to which institute? 
      C. Can an institute belong to another institute? 
      D. Can you identify how many jobs a particular project consists of? 



      Answer: A, B, D 



      84.Scientists in a research institute, Institute may have parent 
      organization/institute. Scientists might work for more than one institute. 


      Exhibit: 
      Project: (PK) projectID, instituteID 
       Institutes: instituteID, instituteID2 
       InstituteScientist: (PK) instituteID, scientistID 
       ScientistProject: (PK) scientistID, projectID 
       Scientist: (PK) scientistID, jobID 
       Job: (PK) jobID, jobdescr 
      A. Can you trace scientists who work for a particular project? 
      B. Can you tell which scientist belongs to which institute? 
      C. Can an institute relate to another institute? 
      D. Can you identify how many jobs a particular project consists of? 
      Answer: B, C, D 

      Frodo: Maybe my IQ is too low, but I can't understand how you can 
      conclude the answers from the information given.
      A: How can this be false, especially if D is true???? Use the 
      ScientistProject table!
      B: I understand this one. Just use the InstituteScientist table.
      C: This is easy. The Institutes table does this.
      D: To calculate this you need to use the Job, Scientist and 
      ScientistProject tables. Does this even work? 

      Maybe I'm a real dumbass, but I would answer A, B, C if I got this 
      exact wording on the exam (not very likely though).


      85.Your database includes tables that are defined as follows: 
      CREATE TABLE Salesperson ( SalespersonID Int IDENTITY (1,1) NOT NULL 
      PRIMARY KEY NONCLUSTERED, RegionID Int NOT NULL, LastName varchar(30) 
      NULL, FirstName varchar(30) NULL, MiddleName varchar(30) NULL, AddressID 
      Int NULL) 
      CREATE TABLE Orders ( OrderID Int IDENTITY (1,1) NOT NULL PRIMARY KEY 
      NONCLUSTERED, SalespersonID Int NOT NULL, RegionID Int NOT NULL, 
      OrderDate Datetime NOT NULL, OrderAmount Money NOT NULL) 
      You need to produce a list of the highest sale for each salesperson on 
      September 15, 1998. 
      The list is to be printed in the following format: 
      Last Name First Name Order Date Order Amount 
      ------------------------------------------------------------ 
      Which query will accurately produce the list? 
      A. SELECT lastname, firstname, orderdate, MAX(orderamount) FROM 
      salesperson s INNER JOIN Orders o ON s.salespersonID = o.salespersonID 
      AND orderdate = '9/15/98' GROUP BY lastname, firstname, orderdate 
      B. SELECT lastname, firstname, orderdate, MAX(orderamount) FROM 
      salesperson s RIGHT OUTER JOIN Orders o ON s.salespersonID = 
      o.salespersonID AND orderdate = '9/15/98' GROUP BY lastname, firstname, 
      orderdate 
      C. SELECT lastname, firstname, orderdate, MAX(orderamount) FROM 
      salesperson s LEFT OUTER JOIN Orders o ON s.salespersonID = 
      o.salespersonID AND orderdate = '9/15/98' GROUP BY lastname, firstname, 
      orderdate 
      D. SELECT lastname, firstname, orderdate, MAX(orderamount) FROM 
      salesperson s LEFT OUTER JOIN Orders o ON s.salespersonID = 
      o.salespersonID WHERE orderdate = '9/15/98' GROUP BY lastname, 
      firstname, orderdate 



      Answer: C 
      Abdul: LEFT OUTER JOIN to get each salesperson from the Salesperson 
      table. Use 'AND' in the ON clause to limit the recordset. Use LEFT 
      because there can be a salesperson who didn't sell anything.
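      The crux of C versus D is where the date filter sits. A hedged sketch, 
      with identifiers taken from the exhibit and an assumed date literal 
      format:

```sql
-- With a LEFT OUTER JOIN, a filter on the Orders side belongs in the ON
-- clause: salespeople with no qualifying orders are kept with NULLs.
SELECT s.LastName, s.FirstName, o.OrderDate, MAX(o.OrderAmount)
FROM Salesperson s
LEFT OUTER JOIN Orders o
  ON s.SalespersonID = o.SalespersonID
 AND o.OrderDate = '19980915'
GROUP BY s.LastName, s.FirstName, o.OrderDate

-- Moving that same filter into a WHERE clause (option D) discards the NULL
-- rows afterwards, effectively turning the outer join back into an inner join.
```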


      86.Your database includes a table named SalesInformation that tracks 
      sales by region. The table is defined as follows: 
      CREATE TABLE SalesInformation ( SalesInformationID Int IDENTITY(1,1) NOT 
      NULL PRIMARY KEY NONCLUSTERED, SalesPersonID Int NOT NULL, RegionID Int 
      NOT NULL, ReceiptID Int NOT NULL, SalesAmount Money NOT NULL) 
      Your database also includes a table named SalesPerson that is defined as 
      follows: 
      CREATE TABLE SalesPerson ( SalesPersonID Int IDENTITY(1,1) NOT NULL 
      PRIMARY KEY NONCLUSTERED, RegionID Int NOT NULL, LastName Varchar(30) 
      NULL, FirstName Varchar(30) NULL, MiddleName Varchar(30) NULL, AddressID 
      Int NULL) 
      You want to ensure that each salesperson enters sales only in the 
      salesperson's own region. Which of the following actions can you perform 
      to accomplish this task? 
      A. Create a foreign key on SalesInformation (RegionID) that references 
      the SalesPerson table 
      B. Create a foreign key on SalesPerson (RegionID) that references the 
      SalesInformation table 
      C. Create a trigger on the SalesInformation table that will verify that 
      the region for the sale is the same as the region for the salesperson 
      D. Create a trigger on SalesPerson that will check whether RegionID is 
      the same as the SalesInformation RegionID 



      Answer: C 
      Gabio_final: D: No good, as the only time that trigger would fire is 
      when a new salesperson is added. 
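      A minimal sketch of the trigger in answer C, assuming the column names 
      from the table definitions above (the trigger name is made up):

```sql
-- Reject any inserted or updated sale whose region does not match the
-- salesperson's own region.
CREATE TRIGGER trg_SalesInformation_Region ON SalesInformation
FOR INSERT, UPDATE
AS
IF EXISTS (SELECT 1
           FROM inserted i
           JOIN SalesPerson sp ON i.SalesPersonID = sp.SalesPersonID
           WHERE i.RegionID <> sp.RegionID)
BEGIN
    RAISERROR ('Sales must be entered in the salesperson''s own region.', 16, 1)
    ROLLBACK TRANSACTION
END
```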


      87.Your database is used to store information about each employee in your 
      company and the department in which each employee works. An employee can 
      work in only one department. The database contains two tables, which are 
      named Department and Employee. The tables are modeled as shown in the 
      exhibit: 
      Department: DepartmentId, DepartmentName, EmployeeId 
      Employee: EmployeeId, SSN, DepartmentId, DepartmentName, blah, blah 
      You want to ensure that all data stored is dependent on the whole key of 
      the table in which the data is stored. What should you do? 
      A. Add EmployeeId to Department table 
      B. Remove SSN from Employee table 
      C. Remove DepartmentName from Employee table 
      D. Remove DepartmentName from Department table 



      Answer: C
      Abdul:
      A: would add a duplicate column to Department. 
      B: SSN is dependent on EmployeeId; there is no benefit to removing it, 
      and if you did, where would you store those details? 
      C: DepartmentName surely must be dependent on DepartmentId, so removing 
      it from the Employee table follows the normalization rule. 
      D: DepartmentName surely is dependent on DepartmentId; there is no 
      benefit to removing it, and if you did, where would you store those 
      details? 

      Normalization is the process of refining tables, keys, columns and 
      relationships to create a consistent database design. Normalization is 
      achieved by applying a number of tests to tables. Three levels of 
      normalization (first, second and third normal form) are commonly 
      applied, although others are defined.
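      Answer C amounts to a one-line schema change; SQL Server 7.0 supports 
      dropping a column with ALTER TABLE:

```sql
-- Remove the transitively dependent column; DepartmentName stays available
-- through a join from Employee.DepartmentId to Department.
ALTER TABLE Employee DROP COLUMN DepartmentName
```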


      88.You are designing the data model to maintain information about students 

      living in a group home. A student might be known under many aliases, and 
      the data model needs to associate each student with aliases and other 
      descriptors such as hair color, weight, religion, physical handicaps, and 
      ethnicity. Some students have siblings or other relatives who are also in 
      the group home, and these family relationships need to be tracked. 
      Multiple addresses might be associated with an individual student, such as 

      the current address, a school address, and addresses of significant 
      relatives. You also need to track significant events in a student's 
      life, which might include attendance at a special training session, 
      medical treatment, a graduation, or a death in the family. 
      You want to accomplish the following goals: 
      - Ensure that any kind of descriptor can be associated with a student. 
      - Ensure that multiple addresses can be associated with multiple students, 

      and that the address usage can be reported. 
      - Ensure that any family relationship with another student can be 
      reported. 
      - Ensure that all known aliases for a student can be reported. 
      - Ensure that significant events in a student's life can be reported. You 
      design the logical model as shown in the exhibit. 
      Which result or results does this model produce? (Choose all that apply.) 
      Exhibit: 
      Table A - Student: StudentId 
      Table B - Family Relationship: StudentId1, StudentId2 (composite) 
      Table C - StudentEvent: StudentId, EventName (composite) 
      Table D - StudentAlias: StudentId, Alias (composite) 
      Table E - Address: StudentId, Address 
      StudentId in tables B-D references StudentId in table A as a foreign 
      key. 
      A. Any kind of descriptor can be associated with a student. 
      B. Multiple addresses can be associated with multiple students, and the 
      address usage can be reported. 
      C. All family relationships with another student can be reported. 
      D. All known aliases for a student can be reported. 
      E. Significant events in a student's life can be reported. 



      Answer: C, D, E 
      Abdul:
      A: no flexible text type like varchar for arbitrary descriptors. 
      B: Address is 1-M from Student, so it cannot associate many students 
      with multiple shared addresses. 
      C: 1-M, then M-1 through the Family Relationship table. Yes. 
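      For goal B to hold, the model would need a many-to-many link table 
      rather than a StudentId column on Address. A hedged sketch in which the 
      table and column names are hypothetical:

```sql
-- Hypothetical link table: many students to many addresses, with the usage
-- ('current', 'school', 'relative') recorded so it can be reported.
CREATE TABLE StudentAddress (
    StudentId int NOT NULL REFERENCES Student (StudentId),
    AddressId int NOT NULL REFERENCES Address (AddressId),
    UsageType varchar(20) NOT NULL,
    CONSTRAINT pk_studentaddress PRIMARY KEY (StudentId, AddressId, UsageType)
)
```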

      Disagreement in the following question!!! 

      89.You have to convert legacy database to SQL 7. Legacy database has: Name 
      as 
      Last & First (one column containing both names) SQL has: EmployeeID 
      IDENTITY LastName, FirstName (2 separate columns with the names) Select 
      the best option: 
      A. DTS, transfer as is without preprocessing 
      B. BCP with /E 
      C. BCP, but before preprocess text legacy file and split name into two: 
      Last and First 
      D. DTS, transfer with option to allow identity insert 
      E. Something outside of normal people understanding, not related to both 
      DTS and BCP 



      Answer: 
      Abdul, Frodo: A
      Florian: D 
      Gabio_final: C (Frodo: no way! Microsoft doesn't want us to use BCP. 
      DTS rules!)
      Abdul: 
      The real question will presumably be understandable. Seems to me the 
      issue is: can DTS break apart the first and last names from the legacy 
      database on the fly while importing into SQL? The answer is definitely 
      yes! Using DTS you can set up transformations. These transformations 
      can include a field from a source table being mapped to a field in the 
      destination table. During the transformation, you can do any sort of 
      ActiveX Script (usually VBScript) programming you want. Just selecting 
      all of the characters _before_ the space would (almost always) give you 
      the first name. Now the question is: can you transform that one column 
      from legacy (with both names) to two different columns in the 
      destination table? Yessir! Just tried it and it works. So now all you 
      have to do is, in the second transform of the column, make sure your 
      ActiveX script selects all the characters _after_ the space, giving you 
      the last name. Done. Easy! 

      But the answers above don't give that as an answer, dammit! 

      If answer A on the real test says 'without preprocessing' but lets you 
      do whatever the heck you want in DTS, then that _would_ be the answer. 

      Answer B is not relevant. 

      Answer C definitely would work, but would suck. Big time. How are you 
      gonna do your preprocessing? In Word? On what, 10 million records? By 
      hand? Theoretically this could be done. Hey, wait a minute! If you just 
      exported that one column only from the legacy database and opened it in 
      Word you would have each paragraph as "FIRSTNAME space LASTNAME 
      CarriageReturn". So just search for 'space' and replace with either a 
      tab or a comma, and now you have a delimited text file. Presuming you 
      could insert that back into your legacy database, or into SQL, in the 
      exact same order, this would take about 5 minutes if you go slow. So C 
      could work, depending on the wording of the question. C would suck a 
      lot more than A, and I don't think the SQL test is testing us on 
      Office; DTS is a much more likely answer! 

      Answer D again refers to identity insert, which I can't see as relevant 
      in any way. 

      DTS can map 1 column to 2 columns. 
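      Whatever tool moves the data, the split itself is simple string work. A 
      hedged T-SQL equivalent of Abdul's DTS transform, assuming a staging 
      table named LegacyStaging with a single Name column and one space 
      between the two names:

```sql
-- Everything before the first space becomes FirstName, everything after it
-- becomes LastName.
SELECT LEFT(Name, CHARINDEX(' ', Name) - 1)                 AS FirstName,
       SUBSTRING(Name, CHARINDEX(' ', Name) + 1, LEN(Name)) AS LastName
FROM LegacyStaging
WHERE CHARINDEX(' ', Name) > 0   -- guard against one-word names
```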

      Disagreement in the following question!!! 

      90.You have an accounting application that captures batches of 
      transactions into a staging table before they are processed. Processing 
      can be performed on individual batches or on the whole staging table. 
      Processing includes many validations before updating any of the 
      production tables. You want to accomplish the following goals: 
      - Avoid deadlocks, or ensure that deadlocks are handled appropriately. 
      - Ensure that each batch of transactions is accepted or rejected. 
      - Allow other users to access the production tables while accounting 
      transactions are being processed. 
      - Minimize resource locking. 
      You take the following actions: 
      - Begin a transaction. 
      - Insert validated rows into the production tables as appropriate by 
      using INSERT...SELECT statements. 
      - Update and delete existing production table rows as appropriate by 
      joining the production tables to the staging tables. 
      - Commit the transaction. 
      - Roll back the transaction if any errors are encountered. 
      (Choose all that apply): 
      A. Deadlocks are avoided or handled appropriately 
      B. Each batch of transactions is accepted or rejected 
      C. Users can access the production tables while accounting transactions 
      are being processed 
      D. Resource locking is minimized 



      Answer: 
      Florian, Abdul: B, D
      Gabio_final:B, C, D 
      Abdul:
      A: Deadlocks are not avoided, and SQL Server will handle them as it 
      sees fit, i.e. terminate the transaction that has been running for the 
      least amount of time.

      B: Yes; COMMIT or ROLLBACK.

      C: Yes, although users may have to wait for locks depending on what 
      they want to do. They can read uncommitted data (dirty reads); need 
      more info to comment correctly.

      D: Not really, as the transaction is done in one large bulk; maybe it 
      could be split, i.e. do a batch at a time with a commit.
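      The actions listed in the question can be sketched roughly as follows; 
      all table, column and variable names are made up for illustration:

```sql
-- One transaction per batch: the validated inserts and the joined updates
-- either all commit or all roll back together.
BEGIN TRANSACTION
INSERT INTO ProductionLedger (BatchID, AccountID, Amount)
SELECT BatchID, AccountID, Amount
FROM Staging
WHERE BatchID = @BatchID          -- validations happen before this point
IF @@ERROR <> 0 GOTO failed
UPDATE p
SET Amount = s.Amount
FROM ProductionLedger p
JOIN Staging s ON p.BatchID = s.BatchID AND p.AccountID = s.AccountID
IF @@ERROR <> 0 GOTO failed
COMMIT TRANSACTION
RETURN
failed:
ROLLBACK TRANSACTION
```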


      91.You must write a stored procedure to perform cascading deletes on 
      HomeLoan 
      database. The client application will pass a parameter containing the 
      CustomerID of the customer to be deleted. A customer might have any number 

      of pending loans. Each pending loan has one or more inspections and one or 

      more appraisals associated with it. The diagram in the exhibit shows the 
      relationship of the tables. 
      Table(customer): PK customerid; lastname, firstname, addressid, 
      statusid, country 
      Table(loan): PK loanid; loandate, loanstatusid, appraisalid, 
      loanamount, customerid 
      Table(inspection): PK inspectionid; loanid, inspectionresultid, 
      inspectiondate, inspectiontypeid 
      Table(appraisal): PK appraisalid; loanid, appraisalamount, 
      appraisaldate 
      Which stored procedure should you use? 
      A. 
      CREATE PROCEDURE loancascadedelete 
      @customerid int AS 
      DELETE FROM appraisal 
      FROM appraisal JOIN LOAN 
      ON appraisal.loanid=loan.loanid 
      WHERE customerid=@customerid
      DELETE FROM inspection 
      FROM inspection JOIN LOAN 
      ON inspection.loanid=loan.loanid 
      WHERE customerid=@customerid
      DELETE FROM loan 
      WHERE customerid=@customerid 
      DELETE FROM customer 
      WHERE customerid=@customerid 
      B. 
      CREATE PROCEDURE loancascadedelete 
      @customerid int 
      as 
      DELETE FROM appraisal 
      FROM appraisal JOIN customer 
      ON loan.customerid=customer.customerid 
      DELETE FORM inspection FROM inspection 
      JOIN customer ON loan.customerid=customer.customerid 
      JOIN customer on loan.customerid=customer.customerid 
      DELETE FROM customer 
      WHERE customerid=@customerid 
      C. 
      CREATE PROCEDURE loancascadedelete 
      @customerid int AS 
      DECLARE @loanid int 
      DECLARE loan_cur CURSOR LOCAL 
      FOR SELECT loanid FROM loan WHERE customerid = @customerid 
      FOR READ ONLY 
      OPEN loan_cur 
      FETCH NEXT FROM loan_cur INTO @loanid 
      WHILE @@FETCH_STATUS = 0 
      BEGIN 
      DELETE appraisal WHERE loanid = @loanid 
      DELETE inspection WHERE loanid = @loanid 
      DELETE loan WHERE loanid = @loanid 
      FETCH NEXT FROM loan_cur INTO @loanid 
      END 
      DELETE customer WHERE customerid = @customerid 
      CLOSE loan_cur 
      DEALLOCATE loan_cur 
      D. 
      CREATE PROCEDURE loancascadedelete 
      @customerid int AS 
      DECLARE @loanid int 
      DECLARE loan_cur CURSOR 
      FOR SELECT loanid FROM loan WHERE customerid = @customerid 
      FOR READ ONLY 
      OPEN loan_cur 
      FETCH NEXT FROM loan_cur INTO @loanid 
      WHILE @@FETCH_STATUS = 0 
      BEGIN 
      DELETE appraisal WHERE loanid = @loanid 
      DELETE inspection WHERE loanid = @loanid 
      DELETE loan WHERE loanid = @loanid 
      FETCH NEXT FROM loan_cur INTO @loanid 
      END 
      DELETE customer WHERE customerid = @customerid 
      CLOSE loan_cur 
      DEALLOCATE loan_cur 
      Answer: A 
      Frodo:
      During the exam, my thinking was that there were some extra unnecessary 
      JOINs in B. I can't read the dump well enough to understand the 
      exhibit.


      92.You are building an Invoicing system for your company. On each invoice, 

      you want to show the following information: Customer Number, Customer 
      Name, Customer Address, Customer City, Customer Territory, Customer Postal 

      Code, Part Number, Part Description, Part shipping weight, Current cost, 
      Current price, Quantity on hand, Order Number, Order Date, Part Number, 
      Quantity There can be multiple parts on one invoice. Cost and information 
      comes from a master list, but the history stored for each invoice should 
      show the cost and price at time of sale. You want to make sure your 
      database is properly normalized. 
      You want to accomplish the following: 
      - Every table must have a primary key. 
      - All non-key columns must depend on the whole primary key. 
      - All columns must contain exactly one value. 
      - Each column in a table must be independent of any non-key column in the 
      same table. You create the logical model as shown in exhibit: 
      Customers: CustomerNo(PK), CustomerName, etc. 
      Part: PartNo(PK), PartDescription, UnitWeight, etc. 
      Orders: OrderNo(PK), OrderDate 
      OrderDetail: OrderNo(PK), ItemNo(PK), CustomerNo(FK), PartNo(FK), 
      PartDescription, QuantitySold, UnitWeight, etc. Which result or results 
      does this model produce? (Choose all that apply) 
      A. Every table has a primary key 
      B. All non-key columns depend on the whole primary key 
      C. All columns contain exactly one value 
      D. Each column in a table is independent of any non-key column in the same 

      table 



      Answer: A, C 
      Abdul:
      A: every table has a PK. 
      B: there is redundant/repeated data (PartDescription in OrderDetail). 
      C: if CustomerName is one column this could be argued to be wrong; the 
      customer name should be split into at least FirstName and LastName. 
      D: if you had a table with OrderID, PartNo, PartDescription, etc., then 
      PartDescription is dependent on PartNo, so you would create another 
      table that contains the descriptions and refers to the order table 
      using PartNo in a one-to-many relationship. In the OrderDetail table, 
      PartDescription is dependent on PartNo, which is not the key. 
      No decomposable columns: all columns describe a single attribute. For 
      example, the CustomerName column can be split into first name and last 
      name columns.

      92. (Frodo: another 92)
      You are developing a personnel database for your company. This database 
      includes an Employee table that is defined as follows: 
      CREATE TABLE Employee ( Id int IDENTITY NOT NULL, Surname varchar(50) NOT 
      NULL, FirstName varchar(50) NOT NULL, SocialSecurityNo char(10) NOT NULL, 
      Extension char(4) NOT NULL, EmailAddress varchar(255) NOT NULL) 
      Your company provides each employee with a telephone extension number 
      and an e-mail address. Each employee must have a unique telephone 
      extension and a unique e-mail address. In addition, you must prevent 
      duplicate SocialSecurityNo values from being entered into the database. 
      How can you alter the table to meet all of the requirements? 
      A. ALTER TABLE Employee ADD CONSTRAINT u_nodups UNIQUE NONCLUSTERED( 
      SocialSecurityNo, Extension, EmailAddress) 
      B. ALTER TABLE Employee ADD CONSTRAINT u_nodupssn UNIQUE NONCLUSTERED 
      (SocialSecurityNo) ALTER TABLE Employee ADD CONSTRAINT u_nodupext UNIQUE 
      NONCLUSTERED (Extension) ALTER TABLE Employee ADD CONSTRAINT u_nodupemail 
      UNIQUE NONCLUSTERED (EmailAddress) 
      C. ALTER TABLE Employee ADD CONSTRAINT u_nodupssn CHECK 
      (SocialSecurityNo='UNIQUE') ALTER TABLE Employee ADD CONSTRAINT u_nodupext 

      CHECK (Extension='UNIQUE') ALTER TABLE Employee ADD CONSTRAINT 
      u_nodupemail CHECK (EmailAddress='UNIQUE') 



      Answer: B 
      Abdul:
      A is out because of the grouping of the fields. 
      C creates the constraints but checks whether the entry equals the 
      literal 'UNIQUE'. 

      ||MORE ||

      A: this would define a composite key, making each combination of values 
      unique, but not each column value.
      B: agree.
      C: adds a CHECK constraint that ensures each value actually equals the 
      string 'UNIQUE'.

      SOME FURTHER NOTES:

      Consider the following facts when you apply a UNIQUE constraint:

      It can allow null values.
      You can place multiple UNIQUE constraints on a table.
      You can apply the UNIQUE constraint to one or more columns that must 
      have unique values but are not the primary key of a table.
      The UNIQUE constraint is enforced through the creation of a unique 
      index on the specified columns.
      The UNIQUE constraint defaults to a unique nonclustered index unless a 
      clustered index is specified.


      93.You have a database that keeps track of membership in an organization. 
      Tables in this database include the Membership table, the Committee table, 

      the Address table, and the Phone table. When a person resigns from the 
      organization, you want to be able to delete the membership row and have 
      all related rows be automatically removed. What can you do to accomplish 
      this task? 
      A. Create a DELETE trigger on the Membership table that deletes any 
      rows in the Committee, Address and Phone tables that reference the 
      primary key in the Membership table. Do not place FOREIGN KEY 
      constraints on the Committee, Address and Phone tables. 
      B. Create a DELETE trigger on the Membership table that deletes any 
      rows in the Committee, Address and Phone tables that reference the 
      foreign key on Committee, Address and Phone. 
      C. Place a PRIMARY KEY constraint on the Membership table with FOREIGN 
      KEY constraints on Committee, Address and Phone. 
      D. Place a PK on Membership. Place FKs on Committee, Address and Phone 
      that reference the PK in the Membership table. Create DELETE triggers 
      on Committee, Address and Phone that will fire when the FOREIGN KEY 
      constraints are violated. 



      Answer: A 

      Frodo (on B): FOREIGN KEY constraints are evaluated before triggers 
      fire. Therefore they can't be used here.
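      A minimal sketch of the trigger in answer A; the linking column name 
      (MemberID) is assumed, since the dump does not show the table schemas:

```sql
-- Fires after Membership rows are deleted; 'deleted' holds the removed rows.
CREATE TRIGGER trg_Membership_Delete ON Membership
FOR DELETE
AS
DELETE FROM Committee WHERE MemberID IN (SELECT MemberID FROM deleted)
DELETE FROM Address   WHERE MemberID IN (SELECT MemberID FROM deleted)
DELETE FROM Phone     WHERE MemberID IN (SELECT MemberID FROM deleted)
```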


      94.Evaluate this statement: 
      USE hr 
      SELECT department_id, SUM(salary) 
      FROM employee 
      GROUP BY department_id 
      HAVING emp_id > 2001 
      Which clause will cause the statement to fail? 
      A. SELECT department_id, SUM(salary) 
      B. FROM employee 
      C. GROUP BY department_id 
      D. HAVING emp_id > 2001 



      Answer: D
      In a HAVING clause you can reference only columns that appear in the 
      GROUP BY clause or in aggregate functions; emp_id is neither, so the 
      statement fails.
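      A working rewrite moves the row-level filter into a WHERE clause, which 
      is applied before grouping:

```sql
-- WHERE filters individual rows; HAVING may only filter the grouped result.
SELECT department_id, SUM(salary)
FROM employee
WHERE emp_id > 2001
GROUP BY department_id
```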


      95.You are designing an insurance database. The Policy table will be 
      accessed and updated by several additional applications. In the Policy 
      table, you need to ensure that the value entered into the 
      Beginning_Effective_Date column is less than or equal to the value 
      entered into the Ending_Effective_Date column. What should you do? 
      A. Program each application to compare the values before updating the 
      Policy table 
      B. Create a CHECK constraint on the Policy table that compares the 
      values 
      C. Create a rule and bind the rule to the Beginning_Effective_Date 
      column 
      D. Create INSERT and UPDATE triggers on the Policy table that compare 
      the values 



      Answer: B 
      Gabio_final: a CHECK constraint can reference another column in the 
      same table.
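      Answer B could look roughly like this (the constraint name is made up):

```sql
-- A table-level CHECK can compare two columns of the same row, and it is
-- enforced no matter which application performs the insert or update.
ALTER TABLE Policy ADD CONSTRAINT ck_effective_dates
CHECK (Beginning_Effective_Date <= Ending_Effective_Date)
```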


      96.Your company uses an application, named Application Z, that is run 
      daily after business hours. As your company expands its business, data 
      becomes more dynamic, and management wants the report to be produced 
      every 2 hours. Multiple applications concurrently access SQL Server 
      during the day. Which two of the following modifications to 
      Application Z should you implement to produce the reports more quickly 
      without preventing other SQL Server applications from performing their 
      tasks? 
      A. Create more indexes on the columns that Application Z uses to 
      calculate aggregates 
      B. Implement triggers to recalculate aggregates each time the 
      appropriate columns are updated 
      C. Specify the NOLOCK hint in the SELECT statements 
      D. Specify the LOW deadlock priority for Application Z 
      E. Set the SERIALIZABLE transaction isolation level 



      Answer: B, C 
      Frodo: Need speed, so sacrifice some accuracy (=dirty reads). Therefore C.
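      Answer C in context: the report's SELECT statements take no shared 
      locks, at the cost of possible dirty reads (the table and column names 
      here are hypothetical):

```sql
-- (NOLOCK) means READUNCOMMITTED for this table reference only; writers in
-- other applications are never blocked by this query.
SELECT RegionID, SUM(OrderAmount)
FROM Orders (NOLOCK)
GROUP BY RegionID
```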


      97.You are troubleshooting a process that makes use of multiple complex 
      stored procedures that operate on a table. The process is producing 
      unexpected updates to the table. You want to identify the specific stored 
      procedure and statement that are causing the problem. What should you do? 
      A. Use SQL profiler to create and replay a trace by using single stepping 
      B. Place trigger on table to send email message when column is set to 
      specific value. 
      C. Execute stored procedure, create process statement to verify syntax is 
      valid. 
      D. Examine transaction log to locate the statement that made the 
      unexpected update 



      Answer: A 


      98.You are developing a sales database for a company that has a 100-person 

      sales staff. The company's policy requires that any sales orders in excess 

      of $100,000 be approved and entered into the database by the sales 
      manager. Your database includes a SalesOrder table that is defined as 
      follows: 
      CREATE TABLE SalesOrder ( Number Char(10) NOT NULL, SalesPerson 
      VarChar(50) NOT NULL, Amount Money NOT NULL) 
      You need to create a view on the SalesOrder table that will prevent the 
      sales staff from entering a sales order in excess of $100,000. Which view 
      should you write? 
      A.
      CREATE VIEW SalesOrderLimit AS 
      SELECT Number, SalesPerson, Amount 
      FROM SalesOrder 
      WHERE Amount<=100000
      B. 
      CREATE VIEW SalesOrderLimit AS 
      SELECT Number, SalesPerson, Amount 
      FROM SalesOrder 
      WHERE Amount<=100000 
      WITH CHECK OPTION 
      C.
      CREATE VIEW SalesOrderLimit AS 
      SELECT Number, SalesPerson, Amount 
      FROM SalesOrder 
      HAVING Amount<=100000
      D. 
      Some silly thing with TOP


      Answer: B
      Gabio_final: 
      WITH CHECK OPTION 
      Forces all data modification statements executed against the view to 
      adhere to the criteria set within select_statement. When a row is modified 

      through a view, the WITH CHECK OPTION guarantees that the data remains 
      visible through the view after the modification has been committed. 


      99.A table has a primary key and full-text search enabled. They give 
      you the names of all the columns, but you want to be able to search for 
      a string in all columns. 



      Answer: FREETEXT or CONTAINS 
      Abdul:
      When you do a full-text search on a table that has full-text searching 
      enabled, you can use either CONTAINS or FREETEXT. With either one, you 
      _must_ specify a table. You then have the option to specify columns, or 
      you can omit them, and all columns in the table that have full-text 
      searching enabled will be searched. The following will only work if you 
      have full-text search enabled, but it shows the syntax: 

      SELECT title 
      FROM titles 
      WHERE FREETEXT (*, 'computer')


      100.You want to create a development database that will hold 20MB of data 
      and 
      indexes and a 4MB transaction log. There are no concerns regarding query 
      performance or log placement with this database. The SQL Server was 
      installed on drive E of the server computer, and there is plenty of disk 
      space on drive E. How would you create the database? 
      A. ?
      B. ??
      C. 
      CREATE DATABASE development ON PRIMARY ( name = development1, filename = 
      'e:\mssql7\data\development1.mdf', size = 20MB) LOG ON ( name = 
      developmentlog1, filename = 'e:\mssql7\data\developmentlog1.ldf', size = 
      4MB)
      D. CREATE DATABASE development (,size = 24MB)

      Answer: C 


      101.A table in the third normal form contains a PK column, 6 foreign keys, 
      and 
      3 numeric columns. How to implement? 
      A. Make 2 tables, one with PK and 3 numeric columns, one 6 foreign keys 
      B. Implement as it is 
      C. Add all related information to the 6 foreign keys to the table 
      D. some stupid constraint 



      Answer: B 

      102. (Frodo: More accurate version of 101. 111 in Abdul, Florian)
      You are implementing a logical data model for an online transaction 
      processing (OLTP) application. One entity from the logical model in 3rd 
      normal form currently has 10 attributes. 
      - One attribute is the primary key. 
      - Six of the attributes are foreign keys referencing six other entities. 
      - The last three attributes represent columns that hold numeric values. 

      How should this entity from the logical model be implemented? 

      A. Create a table by denormalizing the entity. Add the information 
      referenced by the six foreign keys as additional columns of the table. 
      B. Create two tables by denormalizing the entity. Add the primary key and 
      the three numeric values as columns of one table, and the primary key and 
      the information from the six foreign keys as columns of the other table. 
      C. Create the table as described in the logical data model. 
      D. Create a view that references the six foreign keys. 



      Answer : C
      Gabio_final: read normalization forms.


      103.How is an ALTER PROCEDURE statement similar/different than a CREATE 
      PROCEDURE? 


      Answer: The only difference is that the ALTER PROCEDURE command alters a 
      previously created procedure. It does not affect procedures that might 
      depend on the one being altered, nor does it affect the currently 
      assigned permissions. 
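      A minimal sketch of the difference (the procedure name and body here are 
      hypothetical, used only to illustrate the point):

```sql
-- Hypothetical procedure and grant.
CREATE PROCEDURE GetHighValueOrders AS
SELECT * FROM Orders WHERE OrderAmount > 1000
GO

GRANT EXECUTE ON GetHighValueOrders TO Sales
GO

-- ALTER PROCEDURE replaces the body in place; the GRANT above
-- still applies. A DROP followed by CREATE would lose it.
ALTER PROCEDURE GetHighValueOrders AS
SELECT * FROM Orders WHERE OrderAmount > 5000
GO
```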


      104.Your database includes a table that is defined as follows: 
      CREATE TABLE SalesInformation ( SalesInformation_ID Int IDENTITY (1,1) NOT 

      NULL, SalesPerson_ID Int NOT NULL, Region_ID Int NOT NULL, Receipt_ID Int 
      NOT NULL, Salesamount Money NOT NULL) 
      You want to populate the table with data from an existing application that 

      has a numeric primary key. In order to maintain the referential integrity 
      of the database, you want to preserve the value of original primary key 
      when you convert the data. What can you do to populate the table? 
      A. Set IDENTITY_INSERT option to OFF & then insert the data by using a 
      SELECT statement that has a column list 
      B. Set IDENTITY_INSERT option to ON & then insert the data by using a 
      SELECT statement that has a column list 
      C. Insert data by using a SELECT statement that has a column list, and 
      then ALTER TABLE to add Foreign Key 
      D. Insert data by using a SELECT statement that has a column list, and 
      then ALTER TABLE to add Primary Key 



      Answer: B 
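      A sketch of answer B, assuming the legacy data has been staged in a table 
      (here called OldSales, a hypothetical name) that still holds the original 
      primary key values:

```sql
-- Allow explicit values to be inserted into the identity column.
SET IDENTITY_INSERT SalesInformation ON

-- A column list is required while IDENTITY_INSERT is ON.
INSERT SalesInformation
    (SalesInformation_ID, SalesPerson_ID, Region_ID, Receipt_ID, Salesamount)
SELECT OldID, SalesPerson_ID, Region_ID, Receipt_ID, Salesamount
FROM OldSales   -- hypothetical staging table holding the legacy data

SET IDENTITY_INSERT SalesInformation OFF
```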


      105.Your users report slow response time when they are modifying data in 
      your 
      transaction processing application. Response time is excellent when users 
      are merely retrieving data. The search criteria used for modifying data 
      are the same as the search criteria for retrieving data. All transactions 
      are short and follow standard guidelines for coding transactions. You 
      monitor blocking locks, and you find out that they are not causing the 
      problem. What is the most likely cause of the slow response time when 
      modifying data? 
      A. The transaction log is placed on an otherwise busy disk drive 
      B. The transaction log is nearly full 
      C. The checkpoint process is set too short 
      D. The tempDB database is too small 
      E. The tempDB database is on the same physical disk drive as the database 



      Answer: A 
      Gabio_final:
      BOL : Do not place the transaction log file(s) on the same physical disk 
      with the other files and filegroups. 

      Disagreement in the following question!!! 

      106.Your shipping company has a database application that maintains an 
      inventory of items on each vessel. When each vessel is unloaded at its 
      destination, the inventory is counted, and the Arrived_Quantity column is 
      updated in the database. There can be thousands of vessels en route at any 

      one time. Each shipment is identified by a Shipment_ID. Each vessel can 
      carry thousands of items. Each item in a shipment is identified by an 
      item_number. You want to make sure the update of the arrived_quantity is 
      fast as possible. What should you do? 
      A. Create nonclustered index on the Shipment_ID column, the Item_Number 
      column & Arrived_Quantity column 
      B. Create clustered index on the Shipment_ID column, the Item_Number 
      column & Arrived_Quantity column 
      C. Create clustered index on the Shipment_ID column & the Item_Number 
      column 
      D. Create nonclustered index on the Shipment_ID column, & the Item_Number 
      column 



      Answer:C

      (Gabio_final: D
      Abdul, Florian: C)
      Frodo: As fast as possible implies clustered (C). If it were something 
      more general like improve query performance, I would choose D. Microsoft 
      likes nonclustered indexes in WHERE statements. One factor is the size of 
      the index. Frodo's gut thinks that C could degrade performance because of 
      the huge clustered index size.

      Disagreement in the following question!!! 

      107.The user wants to find out all those, which had some sales. The SQL 
      command given is: SELECT title_id FROM titles WHERE title_id = (SELECT 
      title_id FROM sales) The above SQL is: 
      A. Outstanding 
      B. Adequate 
      C. Seems to be ok will not work 
      D. It is wrong and will not work 



      Answer:D

      (Gabio_final, Frodo: D 
      Florian, Abdul: A)
      Gabio_final:
      Try it in SQL Query Analyzer: "Subquery returned more than 1 value. This 
      is not permitted when the subquery follows =, !=, <, <=, >, >= or when 
      the subquery is used as an expression." The solution could be: 
      SELECT title_id FROM titles WHERE title_id IN (SELECT title_id FROM sales)
      Frodo agrees: A is clearly wrong.


      108.You are designing a data model for a DSS system containing two tables 
      (parent child relationship). The child table holds thousands more rows 
      than the parent. The fields in the parent table will be queried the most. 
      How do you implement the design? 
      A. One table (the parent) 
      B. One table: the parent that includes the aggregate fields from the child 


      C. Two tables as designed 
      D. Two tables, include in the Parent aggregates from the child 



      Answer: D 
      Sure (Transcender) 


      109.You are designing a logical model for database that will be used by an 

      employment agency. The agency accepts applications from individuals who 
      are looking for the jobs. In an attempt to increase their chances of being 

      hired, some applicants may use multiple aliases & submit multiple 
      applications for different jobs. The employment agency's recruiters can 
      identify some of the aliases by address & phone number. Some 
      recruiters try to find suitable jobs for registered applicants. When a 
      match is found the applicant is provided with employer's contact 
      information. You take the following steps: 
      - Design a table named JOBS that lists all available positions job 
      descriptions & salaries. 
      - Design a table named RECRUITERS that lists all recruiters. 
      - Design a table named EMPLOYERS that list all employers. 
      - Create a FK on RecruiterID cl. to reference the RECRUITERS tb. 
      - Design a table named APPLICANTS that lists all applicants, their desired 

      positions & salaries, & description of each applicant's skills. 
      - Create FK on the RecruiterID cl. to reference the RECRUITERS tb. 
      - Design a table named ALIASES. Include in each record 2 applicant IDs 
      that are associated with 2 different names used by same applicant. 
      - Create a FK on each of these columns to reference the ApplicantID cl. 
      in the APPLICANTS tb. 
      Which results? 
      A. Each applicant can be matched to all employers who offer suitable 
      positions 
      B. Each available position can be associated with employer 
      C. Each applicant can be associated with all corresponding aliases 
      D. The data does not contain redundant data 
      E. All applicants & employers whom a recruiter has registered can be 
      identified 



      Answers: C, D, E 
      Sure 100% -- Transcender B #15

      Gabio_final:
      A: EMPLOYERS : contain only employers; its no link to JOBS
      B: see A


      110.You are designing an INVENTORY database application for a national 
      automobile sales registry. This new database application will keep track 
      of automobiles available at participating dealerships, and will allow 
      each dealership to sell automobiles from the inventories of other 
      dealerships. Many makes and models of automobiles will be shown from each 
      dealership. 
      - You want to be able to track information about each automobile. 
      - You want to normalize your database. Which tables should be included in 
      the database application? 
      (Choose all that apply) (Choose Two) 
      A. A table containing the list of all dealership along with the address 
      and identification number for each dealership. 
      B. A table containing contact information for each automobile Manufacturer 

      along with the name of each Model manufactured by each Manufacturer. 
      C. A table containing the name & address of each dealership along with 
      automobile information. 
      D. A table containing an identification number for each automobile, the 
      owning dealership's identification number and other information specific 
      to each automobile 



      Answer: A, D 


      112.You automate the backup and recovery process for your database 
      application. After the database is restored, you discover that the queries 

      that use the FREETEXT & CONTAINS keywords no longer return the expected 
      rows. What should you do? 
      A. Alter the query to use the LIKE keyword instead of the FREETEXT & 
      CONTAINS keywords 
      B. Alter the query to use the FREETEXTTABLE & CONTAINSTABLE keywords 
      instead of the FREETEXT & CONTAINS keywords 
      C. Add the database FULLTEXT catalog to backup job & recovery job 
      D. Add the job to the restoration process to recreate and populate the 
      full-text catalog 



      Answer: D 


      113.Your database includes a table named Product. The Product table 
      currently has a clustered index on the primary key of the product_id 
      column. There is also a nonclustered index on the Description column. You 
      are experiencing 
      very poor response times when querying the Product table. Most of the 
      queries against the table include search arguments on the description and 
      product_type columns. Because there are many values in the Size columns 
      for any product, query result set usually contain between 200 and 800 
      rows. You want to improve the response times when querying the product 
      table. What should you do? 
      A. Use SQL Server Enterprise Manager to create nonclustered index on each 
      column being referenced by each SELECT statement 
      B. Use SQL Server Enterprise Manager to generate stored procedures for the 

      product table 
      C. Use the Index Tuning Wizard to identify and build any missing indexes 
      D. Use SQL Server Profiler to capture performance statistics of queries 
      against the product table 



      Answer: C 
      Gabio_final:
      The Index Tuning Wizard allows you to select and create an optimal set of 
      indexes and statistics for a Microsoft SQL Server database without 
      requiring an expert understanding of the structure of the database, the 
      workload, or the internals of SQL Server.


      114.You have a table with 10,000 rows, and it grows by 10% every year. 
      There is a nightly batch that updates the table with data. The following 
      morning, users are complaining that the queries they run are really slow 
      the first time they run but speed up the second time. What can you do to 
      speed up the performance of queries? 
      A. Run sp_createstats as part of the nightly batch job 
      B. Run sp_updatestats as part of the nightly batch process 
      C. Set the auto update statistics Database option to be true 
      D. Can't remember but some weird Database option 



      Answer: B 
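      A sketch of answer B: the nightly batch changes enough data that the 
      distribution statistics go stale, so the batch should end with a 
      statistics refresh before the first morning queries run:

```sql
-- Last step of the nightly batch: refresh distribution statistics
-- for all user-defined tables in the current database, so the
-- optimizer picks good plans on the first run the next morning.
EXEC sp_updatestats
```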


      115.You have an application that captures real time stock market 
      information 
      and generates trending reports. In the past, the reports were generated 
      after the stock markets closed. The reports now need to be generated on 
      demand during trading hours. What can you do so that the reports can be 
      generated without affecting the rest of the application? (Choose two) 
      A. Program the application to issue the following command before 
      generating a report: set transaction isolation level read uncommitted 
      B. Program the application to issue the following command before 
      generating a report: set transaction isolation level serializable 
      C. Require the application to include the NOLOCK table hint while 
      generating a report 
      D. Require the application to include TABLOCKX while generating a report 
      E. On the stock transaction tables, create triggers that update summary 
      tables instead of performing a data analysis each time a report generated 
      F. Declare global scrollable cursors on the stock transaction tables 



      Answer: A, E 
      Gabio_final: maybe C?
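      The two locking-related options can be sketched as follows (the table and 
      column names are hypothetical); both let the report read without taking 
      shared locks, at the cost of possibly seeing uncommitted data:

```sql
-- Answer A: set per connection; affects every statement that follows.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT StockSymbol, Price FROM StockTrades   -- hypothetical table

-- Answer C: the NOLOCK table hint has the same effect, but only
-- for the table it is attached to in this one query.
SELECT StockSymbol, Price FROM StockTrades (NOLOCK)
```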


      116.Your database includes a SalesPerson table that tracks various data, 
      including the sales goal and actual sales for individual salespeople. The 
      sales manager wants a report containing a list of the five least 
      productive salespeople, along with their goal and their actual sales 
      production. You will use an ascending sort to order the information in the 

      report by actual sales production. What should you do to produce this 
      report? 
      A. Issue a set Rowcount 5 statement before issuing a select statement 
      against the sales person table 
      B. Include a top 5 clause in the select list against the salesperson table 


      C. Issue a set query_governor_cost_limit 5 statement before issuing a 
      select statement against the salesperson table 
      D. Cost the rows returned by using a *** count(*) < 5 


      Answer: B 
      Microsoft prefers TOP rather than SET ROWCOUNT. 
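      A sketch of answer B, assuming column names Goal and ActualSales on the 
      SalesPerson table (the question does not give them):

```sql
-- Five least productive salespeople: ascending sort puts the
-- lowest actual sales first, and TOP 5 keeps only those rows.
-- Column names are assumed; the question does not define them.
SELECT TOP 5 SalesPersonName, Goal, ActualSales
FROM SalesPerson
ORDER BY ActualSales ASC
```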


      117.You add new functionality to an existing database application. After 
      the upgrade, users of the application report slower performance. The new 
      functionality executes multiple stored procedures and dynamic SQL 
      statements. You want to identify the specific queries that are 
      encountering excessively long execution times. What should you do? 
      A. Run the SQL Server Tuning Wizard 
      B. Create a SQL Profiler trace that uses minimum execution time and 
      application 
      C. Use the Current activity Dialog Box of the SQL Server EM to list 
      current user tasks and objects blocks 
      D. Use the sp_monitor stored procedure to monitor the CPU Busy and IO 
      Busy columns before and after adding the new functionality 


      Answer: B 


      118.Evaluate this statement: USE sales SELECT manufacturer_id, 
      SUM(unit_price) 
      FROM inventory GROUP BY manufacturer_id If the inventory table contains 
      350 unit_price values and there are 125 different manufacturers, how many 
      unit_price values will be displayed? 
      A. one 
      B. 350 
      C. one for each record in the inventory table 
      D. one for each manufacturer_id value in the result set 


      Answer: D 
      Abdul: The result set will include one unit_price value for each unique 
      manufacturer_id value in the inventory table. 


      119.How can you change a stored procedure without having to reapply all 
      the current permissions on it? 


      Answer: Use the ALTER PROCEDURE statement. 


      120.Your table of medical information includes a table named Experiment 
      that 
      is defined as follows: 
      CREATE TABLE Experiment ( ExperimentID char(32), Description Text, Status 
      Integer, Results Text) 
      You write the following: 
      SELECT * FROM Experiment WHERE CONTAINS (Description, 'angina') 
      You are certain that there are matching rows, but you receive an empty 
      result set when you try to run the query. What should you do? (Choose two) 


      A. Ensure that there is a nonunique index on Description column of 
      Experiment table 
      B. Ensure that there is a clustered index on Results column of Experiment 
      table 
      C. Create FULLTEXT catalog that includes the Experiment table 
      D. Create a scheduled job to populate the FULLTEXT catalog 


      Answer: C, D 
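      A hedged sketch of answers C and D using the full-text stored procedures. 
      It assumes full-text search is installed, and that a unique index named 
      pk_experiment exists on ExperimentID; the CREATE TABLE in the question 
      does not define one, and full-text indexing requires a single-column 
      unique index:

```sql
-- Enable full-text indexing for the current database.
EXEC sp_fulltext_database 'enable'

-- Answer C: create a catalog and register the table and column.
-- 'pk_experiment' is an assumed unique index on ExperimentID.
EXEC sp_fulltext_catalog 'ExperimentCatalog', 'create'
EXEC sp_fulltext_table 'Experiment', 'create', 'ExperimentCatalog', 'pk_experiment'
EXEC sp_fulltext_column 'Experiment', 'Description', 'add'
EXEC sp_fulltext_table 'Experiment', 'activate'

-- Answer D: population must be (re)run after a restore, so schedule
-- this step as a job in the restoration process.
EXEC sp_fulltext_catalog 'ExperimentCatalog', 'start_full'
```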


      121.What does the following do? disk resize name='payroll_log', size=15560 

      A. Size increased by 15MB 
      B. Increased to 15MB 
      C. Increased to 30MB 
      D. Error message 


      Answer: D 
      Abdul: DISK RESIZE statement does not alter the size of the database. 
      Instead, use ALTER DATABASE. 
      Frodo: Gabio_final accidentally gives B - it must be a typo.
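      In SQL Server 7.0 the supported way to grow a file is ALTER DATABASE; a 
      sketch, with the database name and logical file name assumed:

```sql
-- Grow the log file to 15 MB. 'payroll' and 'payroll_log' are
-- assumed names; SIZE in MODIFY FILE is the new total size,
-- not an increment.
ALTER DATABASE payroll
MODIFY FILE (NAME = 'payroll_log', SIZE = 15MB)
```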

      122. 
      Want to inform users of the correct syntax if they don't enter all the 
      necessary parameters. How? 
      A. Batch 
      B. Rule 
      C. Stored procedure 
      D. Trigger 


      Answer: D 
      Abdul: The primary use for triggers is to enforce business rules on the 
      data in a table. If your application requires (or can benefit from) 
      customized messages and more complex error handling, you must use a 
      trigger. 
      Frodo: The word Parameter makes me think. Parameters are used in Stored 
      Procedures (and Queries) not in Triggers. Frodo still thinks D is correct. 

      Something must be missing/wrong in the question.


      123.You have an inventory db with the log on a separate device. Want to 
      increase log by 20MB. You have created device invlogdev2 of 40MB. Is the 
      following correct? 
      alter database inventory on invlogdev2=20 exec sp_logdevice inventory, 
      invlogdev2 
      A. Yes 
      B. No, only increased by 10MB 
      C. No, trans. log moved instead of increased 
      D. No, need DBCC CHECKALLOC to determine errors 
      E. Cannot determine based on the info 


      Answer: A 
      Gabio_final: 
      The ALTER DATABASE statement is used to increase the size of a database or 

      the transaction log for a database. The sp_logdevice stored procedure is 
      used to specify that space added to a device should be allocated to the 
      transaction log for a database.
      sp_logdevice put syslogs (contains the transaction log) on a separate 
      database device. To add another log segment to a database with an existing 

      log segment, it was necessary to execute DISK INIT followed by 
      sp_logdevice. Consider removing all references to sp_logdevice and 
      replacing with references to CREATE DATABASE. Pre-SQL Server 7.0 scripts 
      using the LOG ON clause of CREATE DATABASE will work as expected. Scripts 
      without the LOG ON clause of CREATE DATABASE will have a log file 
      generated automatically.

      Disagreement in the following question!!! 

      124.Exhibit: 
      Table1 (Company): CompanyID(PK), CompanyName 
      Table2 (Employee): EmployeeID(PK), FirstName, LastName, 
      SocialInsurance#, CompanyName 
      Asked for optimizing the data design (something like that) 
      A. Add EmployeeID to Company Table 
      B. Remove SocialInsurance# from Employee table 
      C. Remove CompanyName from Employee table 
      D. Remove CompanyName from Company table 


      Answer:
      Frodo: C
      Abdul, Florian, Gabio_final: A
      Frodo: 
      C removes redundancy.
      What good is A? The idea must be to make a foreign-key constraint on 
      EmployeeID, from Company to Employee. But that constraint would imply 
      that every company has exactly one employee. Not good.


      125.You have a Decision Support System (DSS) database that allows users to 

      create and submit their own ad hoc queries against any of the DSS tables. 
      The users report that the response times for some queries are too long. 
      Response times for other queries are acceptable. What should you do to 
      identify long running queries? 
      A. SQL Server Enterprise Manager 
      B. SQL Server Profiler 
      C. SQL Server Query Analyser 
      D. Microsoft Windows NT Performance Monitor 


      Answer: B 


      126.You have a table with a clustered primary key. New records are added 
      by a 
      batch job that runs at night. The table grows by 20 percent per year. 
      During the day, the table is used frequently for queries. Queries return 
      ranges of rows based on the primary key. Response times for queries have 
      become worse over time. You run the DBCC SHOWCONTIG statement. The 
      statement provides the following output: 
      Pages Scanned 354 
      Extents Scanned 49 
      Extent Switches 253 
      Avg. pages per extent 7.2 
      Scan Density 17.79% [45:94] 
      Extent Scan Fragmentation 82.21% 
      Avg. Bytes Free per Page 485.2 
      Avg. Page Density (full) 94.01% 
      How to improve the query performance? 
      A. Update statistics on clustered index 
      B. Change the row size to fit efficiently on a page 
      C. Rebuild clustered index with fill factor 100 
      D. Rebuild clustered index with fill factor 25 
      E. Rebuild clustered index with fill factor 75 


      Answer: E
      Gabio_final: because the table is used frequently for queries 
      Frodo: Weird. Sounds like Gabio_final is arguing for C but he chose E 
      anyway.
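      A sketch of answer E (table and index names are hypothetical, since the 
      question does not name them): rebuilding the clustered index removes the 
      fragmentation that DBCC SHOWCONTIG reported, and a 75 percent fill factor 
      leaves free space on each page for the nightly inserts:

```sql
-- Rebuild the clustered index with a 75 percent fill factor.
-- 'Shipments' and 'pk_shipments' are hypothetical names.
DBCC DBREINDEX ('Shipments', 'pk_shipments', 75)
```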


      127.You are designing a data model that will record standardized student 
      assessments for a school district. The school district wants assessments 
      to be completed online. The school district also wants each student's 
      responses and scores to be stored immediately in the database. Every year, 

      each student will complete a behavior assessment and an academic 
      assessment. The school district needs to prevent changes to assessment 
      responses after the assessment is complete, but students should be allowed 

      to change their responses during the course of the assessment. The school 
      district wants to require each student to answer all items on each 
      assessment. When the student indicates completion, the score for the 
      entire assessment must be computed and recorded. You design a student 
      table and an assessment table. 
      You want to accomplish the following goals: 
      Ensure that there is no redundant or derived data. 
      Ensure that an assessment response cannot be changed after the assessment 
      is complete and the score is entered. 
      Ensure that all assessment items have responses when the assessment is 
      completed and the score is entered. 
      Ensure that an assessment score is computed and stored when the assessment 

      is completed and the score is entered . 
      You take the following steps: 
      Add an INSERT trigger on the StudentBehavior table computing a value for 
      the AssessmentScore column when a row is inserted. 
      Add an INSERT trigger on the StudentAcademic table computing value for the 

      AssessmentScore column when a row is inserted. 
      Add an "UPDATE" trigger on the StudentBehavior table rejecting all updates 

      to responses if the value in AssessmentScore column is not null 
      Add an "UPDATE" trigger on the StudentAcademic table rejecting all updates 

      to responses if the value in AssessmentScore column is not null 
      What does this accomplish? 
      a) There is no redundant or derived data 
      b) An assessment response cannot be changed after the assessment is 
      complete and the score is entered 
      c) All assessment items have a response when the assessment is complete 
      and the score is entered 
      d) An assessment score is computed and stored when the assessment is 
      complete and the score is entered 


      Answer: B, C, D 


      128.You would like to create a view of a table for users to input data. 
      The 
      view must restrict the maximum entry to 100,000. How can this be done 
      without the use of a trigger? 


      Answer: Create and bind a rule. 
      Gabio_final:
      A database object bound to a column or user-defined data type that 
      specifies what data can be entered in that column. Every time a user 
      enters or modifies a value (with an INSERT or UPDATE statement), SQL 
      Server checks it against the most recent rule bound to the specified 
      column, for example, for limit checking or list checking. Data entered 
      before the creation and binding of a rule is not checked.
      Frodo: 
      If there is any option which looks like:
      Create VIEW ....WHERE X <= 100000 WITH CHECK OPTION 
      Then Frodo would choose that option.
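      A sketch of the rule approach (the table and column names are assumed, 
      since the question does not give them):

```sql
-- Rule limiting any bound column to a maximum of 100,000.
CREATE RULE MaxEntry_rule AS @value <= 100000
GO

-- Bind the rule to the column the view exposes.
-- 'SalesOrder.Amount' is an assumed table.column name.
EXEC sp_bindrule 'MaxEntry_rule', 'SalesOrder.Amount'
```

      Once bound, every INSERT or UPDATE against the column is checked against 
      the rule, whether the data arrives through the view or directly.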

      129. (Frodo: A better version of this question earlier in the dump.)
      You have an OLTP server. You want to create an Decision Support database 
      without loading any more on the OLTP database. How could you do this? 


      Answer: Create a second database on another server and use replication. 


      130.You have to transfer data into a table with identity from a text file 
      that 
      has a combined person name. Your table has separate lastname, firstname 
      field. How would you do this? 


      Answer: By using DTS. 

      Disagreement in the following question!!! 

      131.Your Sales database is accessed by a Microsoft Visual Basic 
      client/server 
      application. The application is not using the Microsoft Windows NT 
      Authentication security model. The Orders table has a primary key 
      consisting of an identity column named OrderID. When a customer cancels an 

      order, the row for that order is deleted from the Orders table. Sometimes, 

      a customer who cancelled an order will ask that order to be reinstated. 
      You must write a stored procedure that will insert the order back into the 

      database with the original OrderID. 
      You write the following stored procedure to be called by the Visual Basic 
      application: 
      CREATE PROCEDURE InsertReinstatedOrder @OrderID Int, @SalesPersonID Int, 
      @RegionID Int, @OrderDate Datetime, @OrderAmount Money, @CustomerID Int AS 

      SET IDENTITY_INSERT Orders ON INSERT Orders (OrderID, SalesPersonID, 
      RegionID, OrderDate, OrderAmount, CustomerID) VALUES (@OrderID, 
      @SalesPersonID, @RegionID, @OrderDate, @OrderAmount, @CustomerID) SET 
      IDENTITY_INSERT Orders OFF RETURN GO 
      GRANT EXECUTE ON InsertReinstateOrder TO Sales GO 
      You test the procedure and it is implemented with Visual Basic 
      application. A user named Andrew is assigned to the Sales role. Andrew 
      reports that he is receiving an error message indicating that he is having 

      a permission problem with the procedure. What must you do to solve the 
      problem? 
      A. Give user exclusive rights 
      B. Grant access to NT user 
      C. Grant access to NT Group 
      D. Give user db-owner 


      Answer: 
      Abdul, Florian, Gabio_final: A
      Gabio_final, Frodo: D 
      Abdul:
      Permissions to use the statement(s) within the EXECUTE string are checked 
      at the time EXECUTE is encountered, even if the EXECUTE statement is 
      included within a stored procedure. When a stored procedure is run that 
      executes a string, permissions are checked in the context of the user who 
      executes the procedure, not in the context of the user who created the 
      procedure.
      Frodo: Gabio_final gives A in one place in his dump. At another he argues 
      for D:
      Gabio_final:
      A question on permissions in a stored procedure - if you read it 
      carefully, you'll realize that the problem is that the stored procedure 
      gives permissions with GRANT EXECUTE at run time at the end of the sp, so 
      users can't run it. 
      Editing the sp is not an option - add user to db_owner, add user logon to 
      sql server, stuff like that.


      132.Your database includes an Orders table that is defined as follows: 
      CREATE TABLE Orders ( OrderID Int IDENTITY (1,1) NOT NULL, SalesPersonID 
      Int NOT NULL, RegionID Int NOT NULL, OrderDate Datetime NOT NULL, 
      OrderAmount Int NOT NULL) 
      You have written a stored procedure name GetOrders that reports on the 
      Orders table. The stored procedure currently creates a list of orders in 
      order by SalesPersonID. You must change the stored procedure to produce a 
      list of orders in order first by RegionID and then by SalesPersonID. 
      Permissions have been granted on the stored procedure, and you do not want 

      to have to grant them again. How must you change the stored procedure? 
      A. Something with ALTER, but ORDER missing or incorrect.
      B. 
      ALTER PROCEDURE GetOrders As 
      SELECT SalesPersonID, RegionID, OrderID, OrderDate, OrderAmount 
      FROM Orders 
      ORDER BY RegionID, SalesPersonID
      C. Something with DROP procedure, and then add it.
      D. Something other with DROP procedure, and then add it.

      Answer: B


      133.You are building a decision support system (DSS) database for your 
      company. The new database is expected to include information from existing 
      data sources that are based on Microsoft Excel, dBASE III, Microsoft 
      Access, and Oracle. You want to use SQL Server Agent to run a scheduled job 
      to extract information from the existing data sources into a centralized 
      database on SQL Server 7. You do not want to perform any additional 
      programming outside the SQL environment. How must you extract the 
      information from the existing data sources? 


      Answer: Create a Data Transformation Services (DTS) package to import data 
      from each data source. 


      134.You are working on a data conversion effort for a Sales database. You 
      have successfully extracted all of the existing customer data into a 
      tab-delimited flat file. The new Customer table is defined as follows: 
      CREATE TABLE Customer ( Id Int IDENTITY NOT NULL, Lastname Varchar(50) NOT 
      NULL, Firstname Varchar(50) NOT NULL, Phone Varchar(15) NOT NULL, Email 
      Varchar(255) NULL) 
      You need to populate this new table with the customer information that 
      currently exists in a tab-delimited flat file with the following format: 
      Name          Phone          E-mail 
      Adam Barr     555-555-1098   abarr@adatum.om 
      Karen Berge   555-555-7868   kberg@woodgrovebank.com 
      Amy Jones     555-555-0192   ajones@treyresearch.com 
      How can you transfer the data to accurately populate the Customer table? 


      Answer: Import the data by using Data Transformation Services with the 
      "Transform information as it is copied to the destination" option button 
      selected. 
      Gabio_final:
      The DTS Import and DTS Export wizards allow you to map a source column to 
      a destination column as the data is copied. 
      You can stipulate how a source is mapped to a destination by: 
      - Specifying which source columns to copy. 
      - Specifying the destination column where the data will be copied. 
      - Changing the data type of the data, if a valid data conversion is 
      applicable. 

      Disagreement in the following question!!! 

      135.You are designing a database that will be used to store information 
      about tasks assigned to various employees. Each task is assigned to only 
      one employee. The database contains a table named Task that is modeled as 
      shown in the exhibit. You want to use a PRIMARY KEY constraint to uniquely 
      identify each row in the Task table. 
      On which column or columns should you define the PRIMARY KEY constraint? 
      (Choose all that apply)
      A. TaskNo
      B. EmployeeNo
      C. Status
      D. ??


      Answer:
      Abdul, Gabio_final, Frodo: A. 
      Florian: A, B
      Gabio_final: Depends on context; if an employee cannot be assigned to 
      many tasks, then only TaskNo.
      Abdul:
      Consider these tables with data: 
      Task   Emp 
      1      a 
      2      b 
      3      c 
      One task to one employee. To avoid a task being given to two employees, 
      allow the task number to be entered just once (unique). 
      If you use a composite key of (TaskNo, EmployeeNo), then (1, a) is unique 
      as is (1, b), so the same task could be given to two employees. 
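      As a sketch, the discussion above corresponds to putting the PRIMARY KEY 
      on TaskNo alone. The column types here are assumptions, since the exhibit 
      is not reproduced in the dump:

```sql
-- Hypothetical reconstruction of the Task table; only the PRIMARY KEY
-- placement matters, the column types are guesses.
CREATE TABLE Task (
    TaskNo     Int NOT NULL CONSTRAINT pk_task PRIMARY KEY,  -- one row per task
    EmployeeNo Int NOT NULL,  -- the single employee the task is assigned to
    Status     Char(10) NULL
)
```

      Because TaskNo alone is the key, entering the same task number for a 
      second employee is rejected, which enforces the one-task-one-employee 
      rule.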


      136.You have a database to keep track of sales information. Many of your 
      queries calculate the total sales amount for a particular salesperson. You 
      must write a nested procedure that will pass a parameter back to the 
      calling procedure. The parameter must contain the total sales amount from 
      the table that is defined as follows: 
      CREATE TABLE SalesInformation ( SalesInformationID Int IDENTITY(1,1) NOT 
      NULL PRIMARY KEY NONCLUSTERED, SalesPersonID Int NOT NULL, RegionID Int 
      NOT NULL, ReceiptID Int NOT NULL, SalesAmount Money NOT NULL) 
      Which statement can you execute to create the procedure? 
      A.
      CREATE PROCEDURE GetSalesPersonData @SalesPersonID Int, @RegionID Int, 
      @SalesAmount Money OUTPUT AS SELECT 
      @SalesAmount = SUM(SalesAmount) 
      FROM SalesInformation 
      WHERE @SalesPersonID = SalesPersonID 
      B.
      CREATE PROCEDURE GetSalesPersonData @SalesPersonID Int, @RegionID Int, 
      @SalesAmount INT=OUTPUT AS SELECT 
      @SalesAmount = SUM(SalesAmount) 
      FROM SalesInformation 
      WHERE @SalesPersonID = SalesPersonID
      C. 
      CREATE PROCEDURE GetSalesPersonData @SalesPersonID Int, @RegionID Int, 
      @SalesAmount Money AS 
      SELECT 
      @SalesAmount = SUM(SalesAmount) 
      FROM SalesInformation 
      WHERE @SalesPersonID = SalesPersonID
      D. Some other alternative with OUTPUT missing.

      Answer: A
      Gabio_final: Use OUTPUT. 
      Frodo: Hmm, at the exam, I realized that two of the choices had OUTPUT in 
      them. 
      On B: It had INT = OUTPUT. Clearly wrong. The data type should be money, 
      and the position after = is for a default value, not the OUTPUT keyword, 
      so I think it is a syntax error.
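      As an illustration, a calling procedure would retrieve the value from 
      choice A like this (the parameter values and variable name here are 
      illustrative, not from the question):

```sql
-- The caller declares a variable and passes it with the OUTPUT keyword;
-- after EXEC, @Total holds the SUM computed inside the procedure.
DECLARE @Total Money
EXEC GetSalesPersonData @SalesPersonID = 1, @RegionID = 1,
     @SalesAmount = @Total OUTPUT
PRINT @Total
```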


      137.You are implementing a logical data model for an online transaction 
      processing (OLTP) 
      application. One entity from the logical model in 3rd normal form 
      currently has 10 attributes. One attribute is the primary key. Six of the 
      attributes are foreign keys referencing six other entities. The last 
      three attributes represent columns that hold numeric values. How 
      should this entity from the logical model be implemented? 
      A. Create the table by denormalizing the entity. Add the information from 
      the six referenced entities as additional columns of the table. 
      B. Create two tables by denormalizing the entity: add the primary key and 
      the three numeric values as columns of one table, and the primary key and 
      the information from the six foreign keys as columns of the other table. 
      C. Create the table as described in the logical data model. 
      D. Create a view that references the six foreign keys. 


      Answer: C 
      An entity that is already in 3rd normal form maps directly to a table; an 
      OLTP design should stay normalized, so no denormalization is needed.

      138. (Frodo: This is 136 with fewer details.)
      One question gave you examples of stored procedures. You need this stored 
      procedure to pass a value to a higher-level stored procedure. 


      Answer: Use -OUTPUT 
      Abdul:
      only one has OUTPUT word among the answers

      Here's an example of how to use output parameters to create the procedure:
      CREATE PROC getcount @returncount int OUTPUT AS
      SELECT @returncount = COUNT(*) FROM categories
      RETURN (0)
      To run the procedure:
      DECLARE @catch int
      EXEC getcount @catch OUTPUT
      PRINT @catch 
      Frodo: Actually, two alternatives got OUTPUT in them.


      139.A question asking how to implement cascading deletes: 
      A. Create a trigger and do not use referential integrity 
      B. Create a trigger and use referential integrity 
      C. I could not believe that Microsoft does not have a delete cascade option 


      Answer: A 
      In SQL Server 7, FOREIGN KEY constraints do not support cascading actions. 
      A FOREIGN KEY constraint on the child table would also reject the delete 
      before a trigger could cascade it, which is why the trigger must be used 
      without declarative referential integrity. 
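      A minimal sketch of such a cascading-delete trigger. The Customers/Orders 
      tables and the CustomerID column are assumptions for illustration; note 
      that no FOREIGN KEY constraint is declared between the tables:

```sql
-- Hypothetical parent/child pair with no FK constraint declared,
-- so the trigger is free to cascade the delete itself.
CREATE TRIGGER trg_Customers_Delete ON Customers
FOR DELETE
AS
-- 'deleted' holds the parent rows being removed; delete their children too.
DELETE Orders
FROM Orders
     JOIN deleted ON Orders.CustomerID = deleted.CustomerID
```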


      140.You have two servers called New York and Chicago. How do you display 
      sales results from both servers in a single view? 


      Answer: (the view name was not captured; SalesView stands in for it)
      CREATE VIEW SalesView ( a, b, c ) 
      AS SELECT a, b, c FROM Chicago UNION ALL 
      SELECT a, b, c FROM NewYork.Database..Sales 

      Abdul: 
      The other choices showed bizarre combinations. If you know views and 
      server references very well, you wouldn't be fooled.
      Gabio_final: 
      ( a, b, c ): the names to be used for the columns in the view. Naming the 
      columns in CREATE VIEW is necessary only when a column is derived from an 
      arithmetic expression, a function, or a constant; when two or more columns 
      may otherwise have the same name (usually because of a join); or when a 
      column in a view is given a name different from that of the column from 
      which it is derived. Column names can also be assigned in the SELECT 
      statement. If no column list is specified, the view columns acquire the 
      same names as the columns in the SELECT statement. 
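      For instance, the same column naming can be done with aliases in the 
      SELECT itself. The view name here is illustrative, reusing the Orders 
      table from the earlier questions:

```sql
-- Equivalent to listing the columns after CREATE VIEW:
-- the derived SUM(...) column gets its name from the alias instead.
CREATE VIEW RegionSales
AS SELECT RegionID, SUM(OrderAmount) AS TotalAmount
   FROM Orders
   GROUP BY RegionID
```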

      R.R from Israel 
