testing.googleblog.com
I think it's worth underlining that coverage mainly tells you about code that has no tests: it doesn't tell you about the quality of testing for the code that is 'covered', especially if it's only line coverage; branch/condition coverage is more informative there.
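A toy illustration of the commenter's point (the function is hypothetical, not from the post): a single test can reach 100% line coverage here while never taking the `is_member == False` path, so only branch coverage would expose the gap.

```python
def discount(price, is_member):
    total = price
    if is_member:
        total -= 10  # the only line that depends on the branch
    return total

# This one test executes every line, so line coverage reports 100%...
assert discount(100, is_member=True) == 90
# ...yet the False branch is never taken; branch coverage would flag that.
```

A branch-aware tool (e.g. `coverage run --branch` in Python) would report the untaken branch even though every line was hit.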
There are two main reasons for flaky automated tests. 1) Poor locator strategy. Find a methodology that is testable before you have to depend upon it in your automated testing. I just posted a video on this topic a week ago, which shares why our team exclusively uses XPath for some of the most reliable locators you can build. Realize XPath has got a bad rap and is often demonstrated online in…
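The contrast the commenter is hinting at can be sketched with Python's stdlib `xml.etree.ElementTree`, whose limited XPath subset is enough to show the idea (a real UI suite would use Selenium's `By.XPATH` instead; the markup and `id` below are invented for illustration):

```python
import xml.etree.ElementTree as ET

page = ET.fromstring("""
<html>
  <body>
    <div><div><form>
      <button id="submit-order">Place order</button>
    </form></div></div>
  </body>
</html>
""")

# Brittle: encodes the whole layout, so any new wrapper div breaks it.
brittle = page.find("body/div/div/form/button")

# Robust: anchored on a stable attribute, independent of surrounding layout.
robust = page.find(".//button[@id='submit-order']")

assert brittle is not None and robust is not None
assert robust.text == "Place order"
```

The "bad rap" usually comes from recorded absolute paths like the first locator; attribute-anchored relative expressions like the second are what make XPath reliable.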
What I would like to see is a breakdown of how many failures fall into which category. And while the above categories are useful for root-cause analysis and an eventual fix, for the sake of triaging results and deciding what to do when hitting such a failure, whether or not the failure is a true product failure seems to be a HUGE difference from the other three. For the other three, the major r…
Sorry, but I stopped reading after the first paragraph! If your end-to-end tests are often slow, unreliable, and difficult to debug, then just fix the f****g problem (hint: it's most likely not the e2e tests). OK, split it up: - slow: There could be two reasons for that: either the test code is slow, or the production code is slow. Either way it's a bug: just fix it. Also parallelize the test g…
I am not sure if I understand the following statement correctly; could you please explain a little more or give a simple example? Thanks. "because your tests did not cover a specific edge case in an area that did have code coverage"

Not the original author, but there are several levels of code coverage (see also https://en.wikipedia.org/wiki/Code_coverage#Basic_coverage_criteria): - Li…
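A small hypothetical example of that "covered but untested edge case" idea: the single test below gives the function 100% line coverage, yet a dangerous input is never exercised.

```python
def average(values):
    return sum(values) / len(values)

# Covers every line of average(), so the coverage report is 100%...
assert average([2, 4, 6]) == 4.0

# ...but the edge case average([]) was never tested; it would raise
# ZeroDivisionError in production, and coverage alone cannot reveal that.
```

That is the sense of the quoted statement: coverage says the lines ran, not that the tests probed all the inputs those lines must handle.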
I get what the author is going for here, but I don't totally agree with them. I think the easiest and cleanest approach is to move the list of users out of setup and into the test (as the author recommends), but personally I would keep the loops and reorganize the code a bit to make it easier to read, so kind of like a combined DRY and DAMP approach? Pseudocode: def register_list_of_users(user_list): to_retu…
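The commenter's pseudocode is cut off, but the DRY-plus-DAMP idea can be completed as a sketch (the `User` class and function names below are illustrative, not from any real codebase): keep the loop in a well-named helper, while the test data and the call stay visible in the test body.

```python
class User:
    def __init__(self, name):
        self.name = name
        self.registered = False

def register_list_of_users(user_list):
    """DRY: one helper owns the registration loop."""
    registered = []
    for user in user_list:
        user.registered = True  # stand-in for a real registration call
        registered.append(user)
    return registered

def test_all_users_are_registered():
    # DAMP: the data lives in the test, not in setUp, so the test
    # still reads top to bottom.
    users = [User("alice"), User("bob")]
    registered = register_list_of_users(users)
    assert all(u.registered for u in registered)

test_all_users_are_registered()
```

The helper removes loop duplication across tests, while each test remains self-describing about which users it registers.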
Interesting. I would caution that some of the tips are very culture-dependent. For instance, the example of not criticizing the person: I would rather have someone tell me straight to my face "Your approach is adding unnecessary complexity" than go around in circles and word-dance around it. I would appreciate the honesty and the respect for my time (the second way of phrasing is longer, but more…
I would love to see some side-by-side examples in the style shown above where you used the rules of thumb listed. Could anyone point me in the direction of some examples? Thanks.
Interesting, yet not so surprising that larger tests are more flaky. Here are a couple of quick thoughts: 1) I'm not so sure that the linear trend is a good fit, given the large variance for larger tests (it looks rather heteroscedastic to me). 2) It's surprising to me that very small tests (unit tests) are flaky. Is there a pattern in these small flaky tests? 3) The analysis of the Android emulator is i…
You started off by saying that in most organizations there's a dedicated team of engineers (usually tech leads or seniors) who use their free time, or a committee, to oversee initiatives and guidelines to propagate best practices. From the way the article is written it sounds like you're saying you do something else, but your example of having 20%ers doing this seems the same as "free time" to me.
How do you mark a test failure as flaky? Do you have an automated/intelligent system that flags a test-run failure as flaky, or do you do it manually?

Yes, pacts were a strong influence on what we did. However, we never went quite as far as they did, and cut out some of the stuff that makes pacts very powerful in theory but hard to write in practice. Most importantly, instead of writin…
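The consumer-driven contract idea the reply alludes to can be sketched in a few lines (this is a much-simplified illustration inspired by Pact, not the real library's API, and all names are invented): the consumer records the fields it relies on, and a provider-side test verifies its response still supplies them.

```python
# The consumer's recorded expectation: field names and their types.
consumer_contract = {"id": int, "name": str}

def provider_response():
    # Stand-in for the provider's real handler output.
    return {"id": 7, "name": "widget", "extra": "ignored by consumer"}

def verify_contract(response, contract):
    """Provider-side check: every field the consumer needs is present
    and has the expected type. Extra fields are allowed."""
    return all(k in response and isinstance(response[k], t)
               for k, t in contract.items())

assert verify_contract(provider_response(), consumer_contract)
```

The point of cutting such a check down, as the reply suggests, is that even this weak form catches the common breakage (a renamed or retyped field) without the full machinery of pact files.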
Are there still manual testing tasks which TEs do? Is there still a proportion of manual testing which takes place?

Hi Steve, TEs do not perform manual testing. However, manual testing is used by some teams at Google. In this situation, another responsibility of TEs is to formulate and execute a plan for automating as many manual tests as possible. _Matt
Just letting you know: there's a console error when the page loads. The error reads: GET https://2.bp.blogspot.com/_VvKHc_qcUVo/SarWOMfpOqI/AAAAAAAAASE/mM4LwFW8ysE/S45-s35/msn.jpg 404 () comments.js:3 Failed to execute 'write' on 'Document': It isn't possible to write into a document from an asynchronously-loaded external script unless it is explicitly opened. Xb @ comments.js:3 comments @ comments.js:…
Hi Anthony, thank you for this comprehensive article around test planning and considerations for writing test plans. We should also consider the source document on which the test plans are based; the source documents are usually functional specification documents. I have seen test teams lay more emphasis on the test-plan templates and organization and tend to ignore the content within…
I hear you: same issues, same solutions. But we have another tool up our sleeve: we have a section called Reservoir that runs all newly added tests in a loop for a week to determine if there is any flakiness in them; during that time they are not yet part of the critical CI path. Happy to hear we are not alone. Good day.

Thanks for the great blog post! It seems that you categorize flakiness…
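A minimal sketch of that "Reservoir"-style quarantine (not the commenter's actual tool; names and the run count are illustrative): run each new test many times before admitting it to CI, and flag it flaky if its outcome is ever inconsistent.

```python
def is_flaky(test_fn, runs=100):
    """Run a test repeatedly; flaky means both pass and fail were observed."""
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add("pass")
        except AssertionError:
            outcomes.add("fail")
    return len(outcomes) > 1

def stable_test():
    assert 1 + 1 == 2

calls = {"n": 0}
def order_dependent_test():
    calls["n"] += 1
    assert calls["n"] % 2 == 0  # fails on every other run

assert not is_flaky(stable_test)
assert is_flaky(order_dependent_test)
```

A real reservoir would spread the runs over days and varied environments rather than a tight loop, since much flakiness only appears under load or timing variation.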
Would you be able to provide more detail with regards to expectations of the TE when it comes to their technical abilities? Is it similar to SETI? Would TEs at Google say that they are automation experts, or are they more focused and specialized on manual testing?

I believe the article is saying the TEs are good at making test plans, risk analysis, and manual testing, while SETIs excel…
While I share the opinion, I have a problem with measuring the shape. Just out of curiosity, how do you suggest measuring the size of the unit/integration/E2E test suites? Comparing the coverage they achieve, a few E2E tests can generate much higher coverage than several unit tests. Comparing counts, with n thousands of unit tests and only <100 E2E tests, this would still be presented as a pyramid (we…
Does the integration test mentioned in the article mean the test between the many small libraries in the app?

Hey Allen, you need two types of integration tests: 1) integration tests between client and server (for more details on these, see the backend testing section of the blog), and 2) as you mentioned, integration tests between small libraries in the app.
I don't know if my first comment got through, so I'll try again. Did you try correlating a project's code coverage with the number of open issues?
Thanks! As a QAE, I was so tempted to write all my validations in a single test, especially when there is a continuous flow happening. Now that temptation has eased. However, this becomes more challenging when we write UI tests with a lot of inputs before the flow even lands on the page to test. Where is the line to be drawn, before going crazy, between writing automation tests for each test case vs. combining all tests in…
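One common way to draw the line the commenter asks about is to share the expensive navigation in a helper while keeping one behavior per test, so a failure points at exactly one thing. A sketch with invented names (the dict stands in for real UI state):

```python
def navigate_to_checkout(items):
    # Stands in for the many UI inputs needed before the page under test.
    return {"items": items, "page": "checkout"}

def test_checkout_page_is_reached():
    state = navigate_to_checkout([10, 5])
    assert state["page"] == "checkout"

def test_checkout_total_is_sum_of_items():
    state = navigate_to_checkout([10, 5])
    assert sum(state["items"]) == 15

test_checkout_page_is_reached()
test_checkout_total_is_sum_of_items()
```

The navigation cost is paid per test, but each test name now documents one validation, which is the trade-off the post advocates.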
Looking forward to getting my hands on Espresso. Would be interested to get your thoughts on Appium.