description: "poc-change-streams"

schemaVersion: "1.0"

createEntities:
  # Entities for creating changeStreams
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [ commandStartedEvent ]
      # Original tests do not observe getMore commands, but only because event
      # assertions ignore extra events. killCursors is explicitly ignored.
      ignoreCommandMonitoringEvents: [ getMore, killCursors ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name change-stream-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
  # Entities for executing insert operations
  - client:
      id: &client1 client1
      useMultipleMongoses: false
  - database:
      id: &database1 database1
      client: *client1
      databaseName: &database1Name change-stream-tests
  - database:
      id: &database2 database2
      client: *client1
      databaseName: &database2Name change-stream-tests-2
  - collection:
      id: &collection1 collection1
      database: *database1
      collectionName: &collection1Name test
  - collection:
      id: &collection2 collection2
      database: *database1
      collectionName: &collection2Name test2
  - collection:
      id: &collection3 collection3
      database: *database2
      collectionName: &collection3Name test

initialData:
  - collectionName: *collection1Name
    databaseName: *database1Name
    documents: []
  - collectionName: *collection2Name
    databaseName: *database1Name
    documents: []
  - collectionName: *collection3Name
    databaseName: *database2Name
    documents: []

tests:
  - description: "Executing a watch helper on a MongoClient results in notifications for changes to all collections in all databases in the cluster."
    runOnRequirements:
      - minServerVersion: "3.8.0"
        topologies: [ replicaset ]
    operations:
      - name: createChangeStream
        object: *client0
        arguments:
          pipeline: []
        saveResultAsEntity: &changeStream0 changeStream0
      - name: insertOne
        object: *collection2
        arguments:
          document: { x: 1 }
      - name: insertOne
        object: *collection3
        arguments:
          document: { y: 1 }
      - name: insertOne
        object: *collection1
        arguments:
          document: { z: 1 }
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database1Name
            coll: *collection2Name
          fullDocument:
            _id: { $$type: objectId }
            x: 1
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database2Name
            coll: *collection3Name
          fullDocument:
            # Original tests did not include _id, but matching now only permits
            # extra keys for root-level documents.
            _id: { $$type: objectId }
            y: 1
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database1Name
            coll: *collection1Name
          fullDocument:
            _id: { $$type: objectId }
            z: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: 1
                cursor: {}
                pipeline:
                  - $changeStream:
                      allChangesForCluster: true
                      # Some drivers may send a default value for fullDocument
                      # or omit it entirely (see: SPEC-1350).
                      fullDocument: { $$unsetOrMatches: default }
              commandName: aggregate
              databaseName: admin

  - description: "Test consecutive resume"
    runOnRequirements:
      - minServerVersion: "4.1.7"
        topologies: [ replicaset ]
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ getMore ]
              closeConnection: true
      - name: createChangeStream
        object: *collection0
        arguments:
          batchSize: 1
          pipeline: []
        saveResultAsEntity: *changeStream0
      - name: insertOne
        object: *collection1
        arguments:
          document: { x: 1 }
      - name: insertOne
        object: *collection1
        arguments:
          document: { x: 2 }
      - name: insertOne
        object: *collection1
        arguments:
          document: { x: 3 }
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database1Name
            coll: *collection1Name
          fullDocument:
            _id: { $$type: objectId }
            x: 1
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database1Name
            coll: *collection1Name
          fullDocument:
            _id: { $$type: objectId }
            x: 2
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database1Name
            coll: *collection1Name
          fullDocument:
            _id: { $$type: objectId }
            x: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                cursor: { batchSize: 1 }
                pipeline:
                  - $changeStream:
                      fullDocument: { $$unsetOrMatches: default }
              commandName: aggregate
              databaseName: *database0Name
          # The original test only asserted the first command, since expected
          # events were only an ordered subset. This file does ignore getMore
          # commands, but we must expect the subsequent aggregate commands,
          # since each failed getMore will resume. While doing so, we can also
          # assert that those commands include a resume token.
          - &resumingAggregate
            commandStartedEvent:
              command:
                aggregate: *collection0Name
                cursor: { batchSize: 1 }
                pipeline:
                  - $changeStream:
                      fullDocument: { $$unsetOrMatches: default }
                      resumeAfter: { $$exists: true }
              commandName: aggregate
              databaseName: *database0Name
          - *resumingAggregate