...
This started occurring sometime after I upgraded clang to 4.0. That could be the cause, or a coincidence. I can't seem to get mongo built with gcc anymore.

The cause of the segfault is in this test: https://github.com/mongodb/mongo-python-driver/blob/3.4.0/test/test_collection.py#L1621

You can run it like this:

python -m unittest -v test.test_collection.TestCollection.test_group

Log output with increased verbosity:

2017-04-16T18:00:26.920-0700 D COMMAND [conn8] run command pymongo_test.$cmd { drop: "test", writeConcern: {} }
2017-04-16T18:00:26.920-0700 I COMMAND [conn8] CMD: drop pymongo_test.test
2017-04-16T18:00:26.920-0700 D STORAGE [conn8] dropCollection: pymongo_test.test
2017-04-16T18:00:26.920-0700 D INDEX [conn8] dropAllIndexes dropping: { v: 2, key: { _id: 1 }, name: "_id_", ns: "pymongo_test.test" }
2017-04-16T18:00:26.920-0700 D STORAGE [conn8] pymongo_test.test: clearing plan cache - collection info cache reset
2017-04-16T18:00:26.920-0700 D STORAGE [conn8] dropIndexes done
2017-04-16T18:00:26.920-0700 D STORAGE [conn8] deleting metadata for pymongo_test.test @ RecordId(107)
2017-04-16T18:00:26.920-0700 D STORAGE [conn8] WT drop of table:index-79--369124514706915650 res 16
2017-04-16T18:00:26.920-0700 D STORAGE [conn8] ~WiredTigerRecordStore for: pymongo_test.test
2017-04-16T18:00:26.921-0700 D STORAGE [conn8] WT drop of table:collection-78--369124514706915650 res 16
2017-04-16T18:00:26.921-0700 D REPL [conn8] Waiting for write concern.
OpTime: { ts: Timestamp 0|0, t: -1 }, write concern: { w: 1, wtimeout: 0 }
2017-04-16T18:00:26.921-0700 I COMMAND [conn8] command pymongo_test.test command: drop { drop: "test", writeConcern: {} } numYields:0 reslen:80 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.921-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, key: {}, initial: { count: 0 } } }
2017-04-16T18:00:26.931-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 747
2017-04-16T18:00:26.931-0700 D QUERY [conn8] Running query: query: {} sort: {} projection: {}
2017-04-16T18:00:26.931-0700 D QUERY [conn8] Collection pymongo_test.system.js does not exist. Using EOF plan: query: {} sort: {} projection: {}
2017-04-16T18:00:26.931-0700 I COMMAND [conn8] query pymongo_test.system.js query: { find: "system.js" } planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } 0ms
2017-04-16T18:00:26.931-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 747
2017-04-16T18:00:26.931-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, key: {}, initial: { count: 0 } } } planSummary: EOF keysExamined:0 docsExamined:0 numYields:0 reslen:79 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_query 10ms
2017-04-16T18:00:26.931-0700 D COMMAND [conn8] run command pymongo_test.$cmd { insert: "test", ordered: true, documents: 3 }
2017-04-16T18:00:26.931-0700 D STORAGE [conn8] create collection pymongo_test.test {}
2017-04-16T18:00:26.931-0700 D STORAGE [conn8] stored meta data for
pymongo_test.test @ RecordId(108)
2017-04-16T18:00:26.931-0700 D STORAGE [conn8] WiredTigerKVEngine::createRecordStore uri: table:collection-80--369124514706915650 config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
2017-04-16T18:00:26.960-0700 D STORAGE [conn8] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-80--369124514706915650 ok range 1 -> 1 current: 1
2017-04-16T18:00:26.960-0700 D STORAGE [conn8] pymongo_test.test: clearing plan cache - collection info cache reset
2017-04-16T18:00:26.960-0700 D STORAGE [conn8] WiredTigerKVEngine::createSortedDataInterface ident: index-81--369124514706915650 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=8,infoObj={ "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "pymongo_test.test" }),
2017-04-16T18:00:26.960-0700 D STORAGE [conn8] create uri: table:index-81--369124514706915650 config: type=file,internal_page_max=16k,leaf_page_max=16k,checksum=on,prefix_compression=true,block_compressor=,,,,key_format=u,value_format=u,app_metadata=(formatVersion=8,infoObj={ "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "pymongo_test.test" }),
2017-04-16T18:00:26.980-0700 D STORAGE [conn8] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-81--369124514706915650 ok range 6 -> 8 current: 8
2017-04-16T18:00:26.980-0700 D STORAGE [conn8] pymongo_test.test: clearing plan cache - collection info cache reset
2017-04-16T18:00:26.980-0700 D INDEX [conn8] marking index _id_ as ready in snapshot id 2189
2017-04-16T18:00:26.980-0700 D REPL [conn8] Waiting for write concern.
OpTime: { ts: Timestamp 0|0, t: -1 }, write concern: { w: 1, wtimeout: 0 }
2017-04-16T18:00:26.980-0700 I COMMAND [conn8] command pymongo_test.test command: insert { insert: "test", ordered: true, documents: 3 } ninserted:3 keysInserted:3 numYields:0 reslen:44 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 48ms
2017-04-16T18:00:26.980-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, key: {}, initial: { count: 0 } } }
2017-04-16T18:00:26.980-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.980-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 749
2017-04-16T18:00:26.981-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 749
2017-04-16T18:00:26.981-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, key: {}, initial: { count: 0 } } } planSummary: COLLSCAN keysExamined:0 docsExamined:3 numYields:0 reslen:102 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.981-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: { a: { $gt: 1 } }, key: {}, initial: { count: 0 } } }
2017-04-16T18:00:26.981-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached.
query: { a: { $gt: 1 } } sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.981-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 750
2017-04-16T18:00:26.981-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 750
2017-04-16T18:00:26.981-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: { a: { $gt: 1 } }, key: {}, initial: { count: 0 } } } planSummary: COLLSCAN keysExamined:0 docsExamined:3 numYields:0 reslen:102 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.981-0700 D COMMAND [conn8] run command pymongo_test.$cmd { insert: "test", ordered: true, documents: [ { a: 2, b: 3, _id: ObjectId('58f413aafa5bd83bbc77c815') } ] }
2017-04-16T18:00:26.981-0700 D REPL [conn8] Waiting for write concern. OpTime: { ts: Timestamp 0|0, t: -1 }, write concern: { w: 1, wtimeout: 0 }
2017-04-16T18:00:26.981-0700 I COMMAND [conn8] command pymongo_test.test command: insert { insert: "test", ordered: true, documents: [ { a: 2, b: 3, _id: ObjectId('58f413aafa5bd83bbc77c815') } ] } ninserted:1 keysInserted:1 numYields:0 reslen:44 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.982-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, key: { a: 1 }, initial: { count: 0 } } }
2017-04-16T18:00:26.982-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.982-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 752
2017-04-16T18:00:26.982-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 752
2017-04-16T18:00:26.982-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, key: { a: 1 }, initial: { count: 0 } } } planSummary: COLLSCAN keysExamined:0 docsExamined:4 numYields:0 reslen:173 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.982-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { finalize: function (obj) { obj.count++; }, $reduce: function (obj, prev) { prev.count++; }, initial: { count: 0 }, cond: {}, key: { a: 1 }, ns: "test" } }
2017-04-16T18:00:26.982-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.982-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 753
2017-04-16T18:00:26.983-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 753
2017-04-16T18:00:26.983-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { finalize: function (obj) { obj.count++; }, $reduce: function (obj, prev) { prev.count++; }, initial: { count: 0 }, cond: {}, key: { a: 1 }, ns: "test" } } planSummary: COLLSCAN keysExamined:0 docsExamined:4 numYields:0 reslen:173 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.983-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { finalize: function (obj) { return obj.count; }, $reduce: function (obj, prev) { prev.count++; }, initial: { count: 0 }, cond: {}, key: { a: 1 }, ns: "test" } }
2017-04-16T18:00:26.983-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.983-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 754
2017-04-16T18:00:26.984-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 754
2017-04-16T18:00:26.984-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { finalize: function (obj) { return obj.count; }, $reduce: function (obj, prev) { prev.count++; }, initial: { count: 0 }, cond: {}, key: { a: 1 }, ns: "test" } } planSummary: COLLSCAN keysExamined:0 docsExamined:4 numYields:0 reslen:112 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.984-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { finalize: function (obj) { return obj.count; }, $reduce: function (obj, prev) { prev.count++; }, initial: { count: 0 }, cond: {}, $keyf: function (obj) { if (obj.a == 2) { return {a: true} }; return {b: true}; }, ns: "test" } }
2017-04-16T18:00:26.984-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.984-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 755
2017-04-16T18:00:26.984-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 755
2017-04-16T18:00:26.985-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { finalize: function (obj) { return obj.count; }, $reduce: function (obj, prev) { prev.count++; }, initial: { count: 0 }, cond: {}, $keyf: function (obj) { if (obj.a == 2) { return {a: true} }; return {b: true}; }, ns: "test" } } planSummary: COLLSCAN keysExamined:0 docsExamined:4 numYields:0 reslen:101 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.985-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, initial: { count: 0 } } }
2017-04-16T18:00:26.985-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.985-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 756
2017-04-16T18:00:26.985-0700 D QUERY [js] ImplScope 0x56084234e500 unregistered for op 756
2017-04-16T18:00:26.985-0700 I COMMAND [conn8] command pymongo_test.test command: group { group: { $reduce: function (obj, prev) { prev.count++; }, ns: "test", cond: {}, initial: { count: 0 } } } planSummary: COLLSCAN keysExamined:0 docsExamined:4 numYields:0 reslen:102 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2017-04-16T18:00:26.985-0700 D COMMAND [conn8] run command pymongo_test.$cmd { group: { $reduce: 5 ++ 5, ns: "test", cond: {}, key: {}, initial: {} } }
2017-04-16T18:00:26.985-0700 D QUERY [conn8] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2017-04-16T18:00:26.985-0700 D QUERY [js] SMScope 0x560845da8000 registered for op 757
2017-04-16T18:00:26.985-0700 E QUERY [js] SyntaxError: invalid increment operand @group reduce init:1:10
2017-04-16T18:00:26.985-0700 D - [js] User Assertion: 139:SyntaxError: invalid increment operand @group reduce init:1:10 src/mongo/scripting/mozjs/implscope.cpp 918
2017-04-16T18:00:26.986-0700 D - [conn8] User Assertion: 139:SyntaxError: invalid increment operand @group reduce init:1:10 src/mongo/scripting/mozjs/proxyscope.cpp 295
2017-04-16T18:00:26.986-0700 F - [conn8] Invalid access at address: 0x560845af9780
2017-04-16T18:00:26.987-0700 F - [conn8] Got signal: 11 (Segmentation fault). 0x56083f78d507 0x56083f78d045 0x56083fae4da8 0x7f102b6ccf9f 0x560845af9780 0x56084590edaf
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"56083E406000","o":"1387507","s":"_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE"},{"b":"56083E406000","o":"1387045"},{"b":"56083E406000","o":"16DEDA8"},{"b":"7F102B6BC000","o":"10F9F"},{"b":"0","o":"560845AF9780"},{"b":"0","o":"56084590EDAF"}],"processInfo":{ "mongodbVersion" : "3.5.5-206-g932c2f3455", "gitVersion" : "932c2f345598d8e1d283e8c2bb54fd8d0e11c853", "compiledModules" : [ "enterprise" ], "uname" : { "sysname" : "Linux", "release" : "4.10.9-gentoo", "version" : "#1 SMP PREEMPT Tue Apr 11 22:31:37 PDT 2017", "machine" : "x86_64" }, "somap" : [ { "b" : "56083E406000", "elfType" : 3, "buildId" : "FDB31ABC184A9B9C9778496241C88683BA9DBB9A" }, { "b" : "7FFEB58DF000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "2B5CF8A7A3924038EDF8887C27AC2E1754D42DF0" }, { "b" : "7F102E00A000", "path" : "/usr/lib64/libnetsnmpmibs.so.30", "elfType" : 3 }, { "b" : "7F102DE06000", "path" : "/lib64/libdl.so.2", "elfType" : 3 }, { "b" : "7F102DBA2000", "path" : "/usr/lib64/libnetsnmpagent.so.30", "elfType" : 3 }, { "b" : "7F102D997000", "path" : "/lib64/libwrap.so.0", "elfType" 
: 3 }, { "b" : "7F102D6B3000", "path" : "/usr/lib64/libnetsnmp.so.30", "elfType" : 3 }, { "b" : "7F102D276000", "path" : "/usr/lib64/libcrypto.so.1.0.0", "elfType" : 3 }, { "b" : "7F102CF7B000", "path" : "/lib64/libm.so.6", "elfType" : 3 }, { "b" : "7F102CD30000", "path" : "/usr/lib64/libldap-2.4.so.2", "elfType" : 3 }, { "b" : "7F102CB21000", "path" : "/usr/lib64/liblber-2.4.so.2", "elfType" : 3 }, { "b" : "7F102C904000", "path" : "/usr/lib64/libsasl2.so.3", "elfType" : 3 }, { "b" : "7F102C6B9000", "path" : "/usr/lib64/libgssapi_krb5.so.2", "elfType" : 3 }, { "b" : "7F102C44E000", "path" : "/usr/lib64/libcurl.so.4", "elfType" : 3 }, { "b" : "7F102C1DF000", "path" : "/usr/lib64/libssl.so.1.0.0", "elfType" : 3 }, { "b" : "7F102BFD7000", "path" : "/lib64/librt.so.1", "elfType" : 3 }, { "b" : "7F102BD1E000", "path" : "/usr/lib64/libc++.so.1", "elfType" : 3 }, { "b" : "7F102BAF3000", "path" : "/usr/lib64/libc++abi.so.1", "elfType" : 3 }, { "b" : "7F102B8D8000", "path" : "/usr/lib64/libunwind.so.8", "elfType" : 3 }, { "b" : "7F102B6BC000", "path" : "/lib64/libpthread.so.0", "elfType" : 3 }, { "b" : "7F102B320000", "path" : "/lib64/libc.so.6", "elfType" : 3 }, { "b" : "7F102E496000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 }, { "b" : "7F102B109000", "path" : "/lib64/libz.so.1", "elfType" : 3 }, { "b" : "7F102AEF2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3 }, { "b" : "7F102AC19000", "path" : "/usr/lib64/libkrb5.so.3", "elfType" : 3 }, { "b" : "7F102A9E5000", "path" : "/usr/lib64/libk5crypto.so.3", "elfType" : 3 }, { "b" : "7F102A7E1000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3 }, { "b" : "7F102A5D5000", "path" : "/usr/lib64/libkrb5support.so.0", "elfType" : 3 }, { "b" : "7F102A3D1000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3 }, { "b" : "7F102A1B6000", "path" : "/usr/lib64/sasl2/libotp.so", "elfType" : 3 }, { "b" : "7F1029FB0000", "path" : "/usr/lib64/sasl2/libcrammd5.so", "elfType" : 3 }, { "b" : "7F1029DAB000", "path" : 
"/usr/lib64/sasl2/libplain.so", "elfType" : 3 }, { "b" : "7F1029BA6000", "path" : "/usr/lib64/sasl2/liblogin.so", "elfType" : 3 }, { "b" : "7F102999F000", "path" : "/usr/lib64/sasl2/libsasldb.so", "elfType" : 3 }, { "b" : "7F1029791000", "path" : "/usr/lib64/libgdbm.so.4", "elfType" : 3 }, { "b" : "7F1029587000", "path" : "/usr/lib64/sasl2/libscram.so", "elfType" : 3 }, { "b" : "7F1029379000", "path" : "/usr/lib64/sasl2/libdigestmd5.so", "elfType" : 3 }, { "b" : "7F1029174000", "path" : "/usr/lib64/sasl2/libanonymous.so", "elfType" : 3 }, { "b" : "7F1028F6B000", "path" : "/usr/lib64/sasl2/libntlm.so", "elfType" : 3 } ] }}
mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x37) [0x56083f78d507]
mongod(+0x1387045) [0x56083f78d045]
mongod(+0x16DEDA8) [0x56083fae4da8]
libpthread.so.0(+0x10F9F) [0x7f102b6ccf9f]
??? [0x560845af9780]
??? [0x56084590edaf]
-----  END BACKTRACE  -----
Segmentation fault (core dumped)
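For reference, the command whose parse failure immediately precedes the segfault can be rebuilt without a driver. This is a minimal sketch using plain dicts (a real driver would send the `$reduce` source as a BSON Code value, and `build_group_command` is an illustrative helper, not a PyMongo API); the `$reduce` body `5 ++ 5` is intentionally invalid JavaScript, so the server should answer with a SyntaxError rather than crash:

```python
# Sketch of the group command from the log above, as a plain Python dict.
# build_group_command is a hypothetical helper for illustration only.
def build_group_command(ns, reduce_src, cond=None, key=None, initial=None):
    """Assemble a group command document shaped like the one in the server log."""
    return {
        "group": {
            "$reduce": reduce_src,  # JS source; "5 ++ 5" is deliberately invalid
            "ns": ns,
            "cond": cond if cond is not None else {},
            "key": key if key is not None else {},
            "initial": initial if initial is not None else {},
        }
    }

# The command whose parse failure precedes the segfault:
cmd = build_group_command("test", "5 ++ 5")
print(cmd["group"]["$reduce"])  # -> 5 ++ 5
```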
ian@10gen.com commented on Fri, 30 Jun 2017 17:02:42 +0000:

behackett have you had a chance to repro this?

acm commented on Tue, 6 Jun 2017 15:38:38 +0000:

Bernie I'm going to assign this back to you while you try some of my suggestions. Since I can't seem to repro it, there isn't a whole lot more that I can do. If you are able to get somewhere with ASAN or valgrind and have some additional actionable data, feel free to re-assign to me and I'll give it a look.

acm commented on Mon, 5 Jun 2017 14:31:28 +0000:

behackett - Well, that stack trace is pretty unhelpful, I agree. The fact that the this pointer is nullptr is suggestive, but without additional stack it is hard to know how that happened. I'm also still unable to reproduce this in my zesty chroot with the current tip of master, using clang 4.0. Is this still reproducing for you? If so, my next suggestions would be to build under ASAN (add --sanitize=address --allocator=system to your scons invocation) or run mongod under valgrind --soname-synonyms=somalloc=NONE and see what happens. Perhaps the stack is getting badly enough corrupted that GDB can't see it, but ASAN or valgrind will catch it as it happens?

behackett commented on Sat, 20 May 2017 03:08:12 +0000:

Sadly the backtrace doesn't provide much information. I have debug symbols installed for glibc and libc++ and built mongod with --nostrip.

$ gdb --args ./mongod --dbpath ~/data/db
GNU gdb (Gentoo 7.12.1 vanilla) 7.12.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see: .
Find the GDB manual and other documentation resources online at: .
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./mongod...done.
warning: File "/home/durin/work/mongo/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load".
To enable execution of this file add
add-auto-load-safe-path /home/durin/work/mongo/.gdbinit
line to your configuration file "/home/durin/.gdbinit".
To completely disable this security protection add
set auto-load safe-path /
line to your configuration file "/home/durin/.gdbinit".
For more information about this security protection see the "Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
info "(gdb)Auto-loading safe path"
(gdb) run
Starting program: /home/durin/work/mongo/mongod --dbpath /home/durin/data/db
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff28af700 (LWP 23717)]
[New Thread 0x7ffff20ae700 (LWP 23718)]
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] MongoDB starting : pid=23713 port=27017 dbpath=/home/durin/data/db 64-bit host=devbox
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] db version v3.5.7-88-g0acaccdf81
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] git version: 0acaccdf81a00222144253290f9cc3b4fd76e122
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2k 26 Jan 2017
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] allocator: tcmalloc
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] modules: enterprise
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] build environment:
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] distarch: x86_64
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] target_arch: x86_64
2017-05-19T20:04:58.417-0700 I CONTROL [initandlisten] options: { storage: { dbPath: "/home/durin/data/db" } }
2017-05-19T20:04:58.417-0700 W - [initandlisten] Detected unclean shutdown -
/home/durin/data/db/mongod.lock is not empty.
2017-05-19T20:04:58.433-0700 I - [initandlisten] Detected data files in /home/durin/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-05-19T20:04:58.433-0700 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2017-05-19T20:04:58.433-0700 I STORAGE [initandlisten]
2017-05-19T20:04:58.433-0700 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2017-05-19T20:04:58.433-0700 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2017-05-19T20:04:58.433-0700 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7468M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
2017-05-19T20:04:58.567-0700 I STORAGE [initandlisten] WiredTiger message [1495249498:567091][23713:0x7ffff7fb1d40], txn-recover: Main recovery loop: starting at 4/128
[New Thread 0x7ffff18ad700 (LWP 23719)]
[New Thread 0x7ffff10ac700 (LWP 23720)]
[New Thread 0x7ffff08ab700 (LWP 23721)]
[New Thread 0x7ffff00aa700 (LWP 23722)]
2017-05-19T20:04:58.567-0700 I STORAGE [initandlisten] WiredTiger message [1495249498:567594][23713:0x7ffff7fb1d40], txn-recover: Recovering log 4 through 5
2017-05-19T20:04:58.642-0700 I STORAGE [initandlisten] WiredTiger message [1495249498:642845][23713:0x7ffff7fb1d40], file:index-1-526602474019102121.wt, txn-recover: Recovering log 5 through 5
[Thread 0x7ffff00aa700 (LWP 23722) exited]
[New Thread 0x7ffff18ad700 (LWP 23723)]
[Thread 0x7ffff08ab700 (LWP 23721) exited]
[Thread 0x7ffff10ac700 (LWP 23720) exited]
[Thread 0x7ffff18ad700 (LWP 23719) exited]
[New Thread 0x7ffff10ac700 (LWP 23724)]
[New Thread 0x7ffff08ab700
(LWP 23725)]
[New Thread 0x7ffff00aa700 (LWP 23726)]
[New Thread 0x7fffef8a9700 (LWP 23727)]
[New Thread 0x7fffef0a8700 (LWP 23728)]
[New Thread 0x7fffee8a7700 (LWP 23729)]
[New Thread 0x7fffee0a6700 (LWP 23730)]
[New Thread 0x7fffed8a5700 (LWP 23731)]
[New Thread 0x7fffed0a4700 (LWP 23732)]
2017-05-19T20:04:59.637-0700 I STORAGE [initandlisten] dropping unused ident: collection-0--235991914205857805
2017-05-19T20:04:59.645-0700 I STORAGE [initandlisten] dropping unused ident: index-1--235991914205857805
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten]
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** NOTE: This is a development version (3.5.7-88-g0acaccdf81) of MongoDB.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** Not recommended for production.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten]
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten]
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** addressses it should serve responses from, or with --bind_ip_all to
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
2017-05-19T20:04:59.660-0700 I CONTROL [initandlisten]
[New Thread 0x7fffec8a3700 (LWP 23733)]
[New Thread 0x7fffec0a2700 (LWP 23734)]
[Thread 0x7fffec8a3700 (LWP 23733) exited]
[New Thread 0x7fffec8a3700 (LWP 23735)]
2017-05-19T20:04:59.663-0700 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/home/durin/data/db/diagnostic.data'
[New Thread 0x7fffeb8a1700 (LWP 23736)]
[New Thread 0x7fffeb0a0700 (LWP 23737)]
[New Thread 0x7fffea89f700 (LWP 23738)]
[New Thread 0x7fffea09e700 (LWP 23739)]
[New Thread 0x7fffe989d700 (LWP 23740)]
2017-05-19T20:04:59.663-0700 I NETWORK [thread1] waiting for connections on port 27017
2017-05-19T20:05:01.988-0700 I NETWORK [thread1] connection accepted from 127.0.0.1:60240 #1 (1 connection now open)
[New Thread 0x7ffff7fb0700 (LWP 23747)]
2017-05-19T20:05:01.988-0700 I NETWORK [conn1] received client metadata from 127.0.0.1:60240 conn: { driver: { name: "PyMongo", version: "3.5.0.dev0" }, os: { type: "Linux", name: "Gentoo Base System 2.3", architecture: "x86_64", version: "4.11.1-gentoo" }, platform: "CPython 2.7.13.final.0" }
2017-05-19T20:05:01.989-0700 I NETWORK [thread1] connection accepted from 127.0.0.1:60242 #2 (2 connections now open)
[New Thread 0x7fffe909c700 (LWP 23748)]
2017-05-19T20:05:01.989-0700 I NETWORK [conn2] received client metadata from 127.0.0.1:60242 conn: { driver: { name: "PyMongo", version: "3.5.0.dev0" }, os: { type: "Linux", name: "Gentoo Base System 2.3", architecture: "x86_64", version: "4.11.1-gentoo" }, platform: "CPython 2.7.13.final.0" }
2017-05-19T20:05:01.990-0700 I NETWORK [thread1] connection accepted from 127.0.0.1:60244 #3 (3 connections now open)
2017-05-19T20:05:01.990-0700 I - [conn2] end connection 127.0.0.1:60242 (3 connections now open)
2017-05-19T20:05:01.990-0700 I - [conn1] end connection 127.0.0.1:60240 (3 connections now open)
[New Thread 0x7fffe8f9b700 (LWP 23751)]
[Thread 0x7fffe909c700 (LWP 23748) exited]
[Thread 0x7ffff7fb0700 (LWP 23747) exited]
2017-05-19T20:05:01.991-0700 I NETWORK [conn3] received client metadata from 127.0.0.1:60244 conn: { driver: { name: "PyMongo", version: "3.5.0.dev0" }, os: { type: "Linux", name: "Gentoo Base System 2.3", architecture: "x86_64", version: "4.11.1-gentoo" }, platform: "CPython 2.7.13.final.0" }
2017-05-19T20:05:01.992-0700 I NETWORK [thread1] connection accepted from 127.0.0.1:60246 #4 (2 connections now open)
[New Thread 0x7ffff7fb0700 (LWP 23752)]
2017-05-19T20:05:01.992-0700 I NETWORK [conn4] received client metadata from 127.0.0.1:60246 conn: { driver: { name: "PyMongo", version: "3.5.0.dev0" }, os: { type: "Linux", name: "Gentoo Base System 2.3", architecture: "x86_64", version: "4.11.1-gentoo" }, platform: "CPython 2.7.13.final.0" }
2017-05-19T20:05:01.995-0700 I COMMAND [conn4] CMD: drop pymongo_test.test
[New Thread 0x7fffe8e9a700 (LWP 23753)]
2017-05-19T20:05:02.058-0700 E QUERY [js] SyntaxError: invalid increment operand @group reduce init:1:10

Thread 29 "conn4" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7fb0700 (LWP 23752)]
0x000055555c5806c0 in ?? ()
(gdb) bt
#0 0x000055555c5806c0 in ?? ()
#1 0x00007ffff7fad3f0 in ?? ()
#2 0x0000555556438794 in mongo::GroupStage::doWork (this=0x0, out=0x55555c125e60) at src/mongo/db/exec/group.cpp:212
#3 0x000055555c0f2f60 in ?? ()
#4 0x0000000000000002 in ?? ()
#5 0x0000000000000000 in ?? ()

acm commented on Thu, 4 May 2017 20:15:21 +0000:

behackett - I just tested out 3.4 as well, at e624205252429cc6b629dd3f03b90db957454bd6, and I was similarly unable to reproduce a crash. If you can still reproduce this, can you try running your mongod under GDB while running the test, to see if you can get a more complete backtrace? Can you also test again against the tip of 3.4 (or master) as appropriate, and see if you can still reproduce this issue?
acm commented on Thu, 4 May 2017 20:04:08 +0000:

behackett - I just built tip of master (f62ed9c43dbdcd659f622b6f95898a8e71e9db9e) on an Ubuntu Zesty chroot, where clang is 4.0:

$(which clang) --version
clang version 4.0.0-1ubuntu1 (tags/RELEASE_400/rc1)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

I then built mongodb per your build instructions:

buildscripts/scons.py CC=$(which clang) CXX=$(which clang++) -j12 --libc++ --ssl --disable-warnings-as-errors core

$ ./mongod --dbpath ~/var/tmp
2017-05-04T20:01:56.771+0000 I CONTROL [initandlisten] MongoDB starting : pid=7087 port=27017 dbpath=/home/andrew/var/tmp 64-bit host=acm-workstation
...

I cloned the mongo-python-driver repository at 3359f850197f9129ed4d6f81be8fa96da1a8ff23, and ran the tests:

$ python -m unittest -v test.test_collection.TestCollection.test_group
test_group (test.test_collection.TestCollection) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.094s

OK

There were no server crashes. Can you try this again with server master? With 3.4.latest?

behackett commented on Mon, 17 Apr 2017 05:11:31 +0000:

Build command:

scons CC=$(which clang) CXX=$(which clang++) -j24 --libc++ --ssl --disable-warnings-as-errors core

behackett commented on Mon, 17 Apr 2017 05:09:29 +0000:

$ clang --version
clang version 4.0.0 (tags/RELEASE_400/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/lib/llvm/4/bin

Also libcxx 4.0.0.

milkie commented on Mon, 17 Apr 2017 01:25:14 +0000:

To confirm, you're using vanilla version 4.0.0 of clang, released 25 days ago? This will be interesting to debug. It might even be a compiler bug.

behackett commented on Mon, 17 Apr 2017 01:21:00 +0000:

Looking a bit more closely, it looks like the test that causes the segfault intentionally causes an error, testing that we raise the correct exception in PyMongo. For some reason MongoDB master is crashing rather than returning an error.
The test is here: https://github.com/mongodb/mongo-python-driver/blob/3.4.0/test/test_collection.py#L1682-L1683
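The driver-side expectation behackett describes (a non-ok command reply surfaced as an exception, never a server crash) can be sketched with stdlib Python. The `OperationFailure` class and `check_command_response` helper here are illustrative stand-ins for that idea, not PyMongo's actual internals, and the exact reply shape is an assumption:

```python
# Hedged sketch: how a driver might turn a failed command reply into an
# exception. Names are illustrative, not PyMongo's real implementation.
class OperationFailure(Exception):
    """Raised when a server command reply reports failure."""

def check_command_response(reply):
    """Raise OperationFailure if the reply's "ok" field is falsy."""
    if not reply.get("ok"):
        raise OperationFailure(reply.get("errmsg", "command failed"))
    return reply

# What the server *should* return for the invalid "5 ++ 5" reduce function
# (errmsg text taken from the log above; reply shape is assumed):
reply = {"ok": 0, "errmsg": "SyntaxError: invalid increment operand @group reduce init:1:10"}
try:
    check_command_response(reply)
except OperationFailure as exc:
    print("raised:", exc)
```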