SPEAKER 1: Hello, everyone.
And welcome to the SQA Forums webinar hosted by SmartBear
Software, a long-time sponsor of SQA Forums.
Today, you will learn from Sergei Sokolov, Steve Miller,
and Nick Olivo, tips and tricks on overcoming six
common testing challenges.
During the webinar, please make sure you pay attention to
the chat window on your control panel.
There, I will give you instructions on how to win SQA
Forums apparel and goodies.
There will be three chances to win during the webinar, so pay
attention to the messages I send.
So stay tuned, and Sergei, please take it away.
SERGEI SOKOLOV: Good morning.
Good afternoon, everybody.
Welcome to our webinar on overcoming six common testing
challenges.
I'm very pleased to introduce two really smart bears,
colleagues of mine, Steve Miller and Nick Olivo, who are
our respective solutions experts at SmartBear Software.
Steve is a vice president of ALM solutions, previously the
president of Pragmatic Software, the company
SmartBear merged with a couple years ago.
Nick, well, Nick kind of defies categorization.
We'll call Nick the complete superman.
After becoming a TestComplete super user at his previous
company, Nick joined SmartBear about four years ago, and now,
total expert that he is, helps both prospects and customers
get the most out of TestComplete's [INAUDIBLE]
automated testing.
So this is generally what we're going to talk about.
We will focus on the pains of testing that are hopefully
familiar to everybody.
Before we extend our opinion about what they are, we'll
speak a little bit about who or what is SmartBear so that
we gain some credibility in your eyes.
We'll then dive into the details about the six common
testing challenges and solutions as we at SmartBear
offer up those to customers.
I hope you didn't expect something else entirely out of
this webinar.
As they say, this flight is departing to Anchorage,
Alaska, and if you're heading to Miami, this is the time to
get off the plane.
Everybody still with us?
Great.
Nick, Steve, and I will take turns to present.
We will each talk about a couple of common testing pains.
We will summarize after that and go to Q&A. I hope you ask
at least some tough questions, and we'll provide some of the
clear answers.
And then we're going to have a Twitter after party.
There's going to be a Twitter bash.
We're going to send out the hashtag during the webinar.
And if we didn't cover what you were most concerned about,
you can ask questions using the questions feature in your
GoToWebinar panel.
We will respond to those to the extent we
have time at the end.
And for those whose questions haven't been answered, join us
on Twitter with the hashtag HPAlternatives.
And if your pain is that right before the webinar, your boss
came over and told you to go and do some emergency testing,
don't worry: we'll post a replay of the webinar in its
entirety on SmartBear.com.
It also will be available to all registered FMDs.
So who is SmartBear?
Well, we are a software company.
We make great tools, which are used by thousands and tens of
thousands in the field.
Among those tools are TestComplete for automated
testing, QAComplete for Test Management, SoapUI Pro for API
and web services testing, and then a number of others for
peer code review, application lifecycle management,
performance profiling, load testing, web
monitoring, and so forth.
We are proud to say that these tools are built by
users for the users.
We have a lot of people on staff who were developers,
testers, and managers.
So they have direct input into how we make the tools.
All our tools have open free trials, so it's
always free to try.
And we're very proud of our top notch customer support
that's available even with trials.
So if any of these sound familiar, you've come to the
right place.
Well, everybody says setting up automated testing should be
easy, right?
Well, right.
More often than not, it's actually somewhat of a pain,
because no two applications are created equal, and it
seems no two releases are created equal either.
So the challenges with automated testing, despite all
the claims to the contrary, are there.
Some of them relate to complex systems.
Systems that require multiple machines to be exercised,
where you need to run one test on one machine, then follow
up on another machine, and then pull together the results
and the logs.
And many tools just aren't set up to do that.
Even if you've managed to do that, very frequently only a
select set of people have visibility into what the
results were.
Whether it's because of the team organization, whether
it's because of the tools, or whether it's because of the
browsers that some tools don't support.
A significant factor in testing being a pain is not
being able to use the right tool.
The tool your enterprise has may not support the technology,
or may not support it well.
And then you have to go through various hoops to
actually get it to work and deliver the results.
Sometimes that amounts to just doing everything by hand.
What does that lead to?
Well, unfortunately, we've all seen that.
We need to spend another late night at the office.
Obviously, we all want to live normal lives, testers or not,
and automation is supposed to be helpful.
So now, we are going to take up a few of the pains and
detail how we can actually Tylenol them away.
Well, let's start with setting up automated testing.
Nick, it's over to you.
NICK OLIVO: All right.
Thanks very much, Sergei.
Hello, everyone.
Sergei asked me to talk today about a couple of the most
common pains that folks are going to encounter during the
course of their test efforts, and the first one is that
setting up automated testing can be a
slow and complex process.
You see, one of the great strengths of the QA team is
its diverse background.
You're going to have people from all walks of life testing
software, and that's good, because each person brings a
different perspective to the table while testing.
But the downside here is that each user has a different
comfort level when it comes to test automation.
And if you've been doing automation for any length of
time, you know that there are some people out there who just
don't have the needed skills to automate effectively.
And as such, automation, above and beyond simple record and
replay, may be out of their reach.
Even if your team's automation power user builds reusable
function libraries that can handle very sophisticated
tasks, a non-technical person may not fully understand how
to use those functions effectively, and that's going
to hold them back and keep them from automating.
And because of that, more work falls back on the automation
power user, either because they have to pick up the slack or
because they have to spend more time re-explaining how to
call a function or which function to
use in a given situation.
So TestComplete, our automated testing tool, solves this
problem via a feature called extensions.
These allow you to leverage a power user's knowledge by
turning complicated function calls into wizards via
TestComplete's built in tool set.
The end result is that you can enable nontechnical users to
do things that they normally wouldn't be able to do, and
this has the added benefit of freeing up bandwidth on the
team's power user, because now they don't have to spend their
time re-explaining things.
So I'd like to take a few minutes and show you an
example of what these extensions look like and what
they can do.
I'm going to jump out of PowerPoint here.
I've already got our automated tool,
TestComplete, loaded up here.
And this is the code that I would normally have to write
in order to do a complicated action like verify data inside
the Windows registry.
If you guys have ever tried to automate working with the
registry before, you know what an absolutely terrifying
experience that can be, because if you make one wrong
move in the registry, you can mess up your entire system.
So to help with that, you may create some reusable functions
that people can use, like what I've done here.
But the challenge is people then still need to remember
where that function lives.
They need to remember the proper syntax to include the
function when they want to call it.
And then even when calling the function, as you can see here,
they need to remember things like the appropriate syntax,
how to escape various strings and whatnot, and so this may
be off-putting to users who don't have a coding
background.
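The verification pattern Nick is describing, compare an expected registry value against what the key actually contains and log a pass or a detailed failure, can be sketched generically. This is not TestComplete code: the function name and the dictionary standing in for the Windows registry are invented for illustration.

```python
# Illustrative sketch of the check-and-log pattern described above.
# A plain dictionary stands in for the registry; nothing here is
# TestComplete API.

def verify_key(registry, key, value_name, expected, log):
    """Check that registry[key][value_name] == expected, logging the outcome."""
    actual = registry.get(key, {}).get(value_name)
    if actual == expected:
        log.append(f"SUCCESS: {key}\\{value_name} = {expected!r}")
        return True
    log.append(f"ERROR: {key}\\{value_name}: expected {expected!r}, "
               f"found {actual!r}")
    return False

if __name__ == "__main__":
    fake_registry = {r"HKLM\Software\MyApp": {"Path": r"C:\Program Files\MyApp"}}
    log = []
    verify_key(fake_registry, r"HKLM\Software\MyApp", "Path",
               r"C:\Program Files\MyApp", log)          # logs a success
    verify_key(fake_registry, r"HKLM\Software\MyApp", "Version", "2.0", log)
    print("\n".join(log))
```

An extension wraps exactly this kind of function behind a wizard, so the user picks the key and value through a form instead of remembering the function's syntax.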
So let's take a look at this exact same code, but turned
into an extension.
And now instead of verifying the registry via the script,
I'm going to verify the registry via this simple
little wizard.
And everything that you see right here, this form, all the
controls within it, and the code that lives behind it is
actually built right inside of TestComplete, so I don't need
any other tools.
And now, instead of relying on my user to remember the
string, HKLM\Control\whatever, I just
say, OK, Run Regedit.
Select the key that you care about; maybe we want to
verify this application's path key.
All right, we say Select Key.
And now TestComplete's going to read in that key, tell us
the expected key, the expected value.
We say OK, and that's it.
And now, when this runs, TestComplete
will look in the registry.
It will check whether your [INAUDIBLE] key contains that value.
If it does, fantastic.
We put a success message out in our log file.
If it doesn't, we put an error that contains what we were
looking for and what we actually found.
So we've taken something that would normally require a lot
of coding or in depth knowledge, and we've turned it
into something that anybody can use.
So as a power user, you can build that extension.
Give it to everybody on your team.
It then becomes part of the TestComplete install.
Like right here, it shows up right inside
their checkpoint palette.
They don't need to remember any special code to call it.
They don't even need to remember how to format strings.
They don't need to remember the appropriate syntax to
include things.
Now it's just a matter of dropping it on to the form,
clicking three buttons, and they're done.
So this is a really great way to make test setup a
whole lot easier and faster throughout the
course of your testing.
Now another common pain that comes up in testing teams is
that you can't run dependent tasks in different
environments.
In some cases, you may need to purchase an entirely separate
product, or you need to cobble together some convoluted shell
scripts in order to run tests in this fashion.
It would be nice if you could just say run everywhere and
have your tests kick off on all the selected environments.
But you really can't do that.
It's extremely difficult to coordinate tests to run
together as well.
So let's say you've got two systems, and system one is
going to enter transactions.
And system two is going to approve those transactions.
Ideally, what would happen is system one would enter a
transaction.
System two would approve it, and then they would back and
forth until they were done.
But coordinating that could be a very involved process.
And then finally, you'd need to figure out some way to
gather up all the results from each of the systems that's
participating in that test and store them in
a convenient location.
Now lots of teams have implemented scenarios like
this, but it's a pretty involved
and convoluted process.
So the solution here is a feature
called distributed testing.
This allows users to have a single central system that
orchestrates and coordinates what tests get run on what
systems and when.
These machines can be in your lab.
They can be virtual machines.
They can even be systems living out in the cloud.
And as those tests run, users get a remote desktop window
into each system participating in the test, allowing them to
keep tabs on the test's progress.
And then finally, once the tests are finished, all the
results get copied back to that orchestrating test
system, which allows you to easily review the test results
and determine the cause of any failures.
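The orchestration idea behind this can be sketched in a few lines: one central system kicks off jobs on several remote machines, waits for them, and gathers every result in one place. The machine names and test functions below are made up for illustration; in the real feature, TestComplete handles the remote login, execution, and result copying.

```python
# Minimal sketch of a central orchestrator fanning tests out to
# (here, simulated) remote machines and collecting the results.

from concurrent.futures import ThreadPoolExecutor

def run_remote_test(machine, test_name):
    # Placeholder for "log on to `machine` and run `test_name` there".
    return {"machine": machine, "test": test_name, "status": "passed"}

def run_job(tasks):
    """Run all (machine, test) tasks in parallel; copy results back."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_remote_test, m, t) for m, t in tasks]
        return [f.result() for f in futures]

results = run_job([("vm-orders", "LoadOrders"),
                   ("vm-approve", "ProcessOrders")])
for r in results:
    print(f"{r['machine']}: {r['test']} {r['status']}")
```

The key design point is the single collection step at the end: because the orchestrator owns all the futures, the logs never have to be chased down machine by machine.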
So I'd like to take another couple of minutes and show you
how that looks.
So we're going to come back into TestComplete here, and
TestComplete's capability to do this is
called a network suite.
And what this allows you to do is start out by defining those
systems that will participate in your tests.
So here, I've got two virtual machines that are out on my
network, and I've filled in the names of those systems.
And I've also set up TestComplete to automatically log on to
those systems; I've put in the appropriate credentials in
order to log in there.
Once you've got your systems defined, you can create what
are called jobs.
And jobs are just collections of tests that will be run on
those remote systems.
So here you can see, I've got two tasks inside my job here.
I'm going to start out by loading orders on this
particular system.
And then I'm going to process the orders
on the second system.
And to run this, all I have to do is right-click on the job
and say Run.
And now TestComplete is going to call out
to both those systems.
You're going to see some data pop up on my screen here.
You'll see a remote desktop window on the right hand side
of my screen for each one of those systems, and we'll be
able to keep tabs on whether or not the
tests are running properly.
As the tests run, we can keep tabs on their status via the
state column here.
So I can see right now that both my tests
are currently running.
And now I see that my top test has finished out.
It's stopping.
My bottom test is still entering in some data, and now
it's finishing out.
It's copying its results back to me.
And when the tests finish, TestComplete switches gears,
and it shows me this gathered up log file that tells us each
one of the systems that participated in the test.
So we had two tasks that ran, and they were both successful.
And now here's my test from system number two.
I can see that one, and I can drill in and I can see all the
steps that it performed on that system.
And then I come over here, and I can see all the steps that
were performed on my order placement system.
So now if I had 10 or 15 tasks that I wanted to coordinate
across multiple environments, I could set them all up right
here, run them, keep tabs on their status, and analyze
all the results from the comfort of my own cube.
This way I don't have to get up, run all over the office,
pull data files, and put them onto the appropriate
network location.
Everything is gathered right here for me.
So distributed testing and network suites make it much
easier for you to coordinate those very complicated and
convoluted scenarios into a much more cohesive package.
All right.
Now to turn things over to Steve Miller, who's going to
talk to you about the next pains in the process.
STEVE MILLER: All right.
Thanks a lot, Nick.
That network testing is really cool by the way.
So now, let's talk about best practices
for automated testing.
One of the pains we want to address here is that if you're
doing all that running in the background, many times you
don't have visibility to show all of that
run activity to others on your team.
So let's say that we develop some automated tests, and we
set them up to run with each build.
Well, that's good, but what types of pitfalls can we
really run into?
One of the pitfalls that we can run into here is that the
automated tester really is the only person that can see the
automated test results.
But if we had a perfect world, any time an automated test was
run, our entire team would be able to see what was run, what
passed, and what failed.
Now if your team has a test lab set up, very similar to
what Nick was just showing you a minute ago, most likely with
different test lab machines, it's also important to easily
find out what automated tests have been run on each one of
those lab machines.
For example, it'd be really cool if you were able to know
that code changes that were committed caused an issue on a
specific operating system or browser configuration.
Now another painful point for automated tests is setting
them up to run at specific dates and times.
Once you've taken all the effort to create your
automated tests, it would be really ideal if you could
easily schedule them to run on specific lab machines at
specific dates and at specific times.
Finally, when you're working on a specific release, it
would be really important to know how many automated tests
as well as manual tests have been run.
And this provides you a much better understanding of
the type of test coverage that you have.
All right, so what's the solution to providing a much
better visibility of your automated test
and your manual runs?
Well, the solution really revolves around giving your
team visibility to both your automated as well as your
manual test runs.
And if you're able to do that in a browser, that makes it
even easier for people, because most people have
access to their browser.
Better yet, it'd be great if you could run it
regardless of what type of browser you use.
I know that there's some other tools on the market that only
work with Internet Explorer for example.
So it would be really cool if you had that visibility
regardless if you were using Internet Explorer, Firefox,
Google Chrome, Safari, whatever your
favorite browser is.
So that's one of the things that
would be an ideal solution.
Now the solution that you use from a browser perspective
should also allow you to quickly schedule your
automated tests without doing any real messy scripting, or
command files, or anything like that.
It should just be built into your browser solution, allowing
you to schedule those out without any of those hassles.
Finally, the browser solution that you choose should also
allow you to visually see all of your test runs in some type
of graphical format.
If you're able to see them on a dashboard or report, that
would be absolutely ideal.
So how could all of this work?
Well, one of the solutions that we provide is a tool
called QAComplete, and it integrates really nicely with
the TestComplete tool that Nick showed you earlier.
All you really have to do is download a little plug-in,
and then those two tools work
seamlessly together.
What would that look like?
Well, if you log into QAComplete, which we're
showing here, you'll notice that in the QAComplete
solution there's this area here called automation.
And in that automation area, you can come here and very
easily see what automated test runs have happened against
specific machines or hosts.
In fact, I can see on this SM Precision machine here that
we've run one test today.
I can see that it's passed.
I can see how many tests were inside of it.
I could even look at the duration, how long each one of
those took to run.
If I wanted to see what ran yesterday, I can see that we
actually had two runs here.
We had one that passed and one that failed.
Now the other cool thing about this too is Nick was showing
you earlier how you can run a series of tests, right.
And with each one of those tests that you run, you can
actually look at a log file that will tell you
specifically what passed and what failed.
Well, one of the really interesting features here is
for any of these runs that you do, you can simply click on
the run activity, and it'll take you directly into that
log file so that you can look at the details of that
particular log.
So that's one of the nice features here, is that you can
come in, look at any particular test that's run.
You can see step by step what actually happened.
We can see here that a file opened.
It even uses TestComplete's visualizer, which
will show you what the screen looked like when you were
actually running the test, so that's one of the interesting
features there.
Now, we also talked about the fact that not only is it
important to be able to see that run activity, it's also
important to be able to schedule tests from this
browser-based solution, because if you've ever worked
with some of the other automated tools, a lot of times
you have to create a whole bunch of scripts and really
messy command-line procedures.
Well, we've simplified that for you.
Notice here, in QAComplete there's
an automation scheduler.
And if I go to this automation scheduler, I can simply come
in here and click add, and I can choose to add a schedule
for TestComplete.
I can choose a project, and then I can choose from that
project, what time I want it to run.
So let's say that I wanted to run a full regression tonight
at 9 o'clock PM.
I can also, while I'm at it, go ahead and set this up to
run every day of the week, so I can come here and choose
specific days that I want this to run.
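The logic of a schedule like "9:00 PM on selected weekdays" is simple date arithmetic, and QAComplete does it for you through the browser UI. Purely to illustrate what the scheduler computes, here is a rough sketch; the function and its parameters are invented for this example.

```python
# Hedged sketch: given "run at 9:00 PM on these weekdays",
# compute the next run time from now. Not QAComplete code.

from datetime import datetime, timedelta

def next_run(now, run_days, hour=21, minute=0):
    """run_days: weekday numbers (Mon=0 .. Sun=6) the schedule is active."""
    for offset in range(8):  # look at most one week ahead
        candidate = (now + timedelta(days=offset)).replace(
            hour=hour, minute=minute, second=0, microsecond=0)
        if candidate.weekday() in run_days and candidate > now:
            return candidate
    return None

# From a Wednesday at 10:00 PM, the next Mon/Wed/Fri 9:00 PM slot
# is Friday evening.
now = datetime(2024, 1, 3, 22, 0)  # Wednesday, already past 9 PM
print(next_run(now, {0, 2, 4}))    # 2024-01-05 21:00:00
```

The point of hiding this behind a scheduler UI is exactly that nobody on the team has to write or maintain this kind of logic in scripts or command files.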
Now, the real key advantage here is again I don't have to
write any of that messy scripting.
I can easily come here and set this to run on any machine
that's running TestComplete or TestExecute.
And because this is browser based, I don't have to have a
VPN connection back into my office, or any of
that kind of thing.
I can do it securely from my home with no problems there.
Now, one of the other things that we talked a little bit
about earlier was being able to see all of that run
activity graphically.
That's what this allows us to do.
So I can come in here, and I can look at my
manual test runs here.
I can see what browsers they're running under, how
many passed, how many failed.
I can look at all of my test runs linked back to a specific
requirement.
And lo and behold, here are all of the automated tests
that have run day by day, showing me clearly how many
passed, how many have failed.
And all of that information is just right there nicely
aggregated together for us.
So let's take a look now at the next pain.
So the next pain is how do we know if we have enough
coverage when it comes to designing our test for a
specific release.
Let's face it, any time you're writing new code for a release
the possibility of breaking an existing feature exists.
It just does.
So we need to be able to have some way to
mitigate that risk.
Also, when we're working with new features that we're
implementing in a release, we also need to feel comfortable
that we've created enough test cases to fully test the
features that we're delivering in that new release.
And that should include both positive and negative tests.
And once we have enough automated and manual tests,
it's also important, equally important, that everybody on
the team has visibility to what tests have been created,
what tests have been run, how many have passed, and how many
have failed.
And then finally, once we begin our testing, it's also
important to ensure that all the defects that get
found during the testing process get
fixed in a timely manner.
So if you're using email or some other communication like
that to report defects, you know how risky that is.
Defects can get lost in the shuffle, and they just don't
get fixed before you get into production.
And that can cause you some support issues
once you get there.
So what's the solution for this?
How do we ensure that we have enough test coverage and that
all of the critical defects get fixed?
Well, if you follow Nick's advice, you will be creating
automated tests that you can run on each one of your builds
that you do, and these tests will notify you if you broke
any of your existing functionality.
So that's one thing you can do.
Next, for any new features that you're working on in a
specific release, you'll also need a way of linking those
tests back to the requirement to ensure that you have enough
positive and negative tests that are going to fully test
each one of those features that you're implementing in
your release.
That process is called traceability, and it's really
critical to ensure that you have enough test cases
covering each one of your requirements.
As you begin your testing, you're going to want to log
all of your defects, and it's really good to place those in
a central repository so that both your testers and your
developers can access those defects and get the
visibility they need.
Even better if you use a tool that automatically sends
email alerts, for example to developers, as defects
are created and assigned to them.
And then once those defects are fixed by the developers,
if the tool has a way to alert the tester that the defect
has been fixed, you're even better off there.
Now, finally, once all of your defects are logged, you
probably want to keep an eye on the number of defects that
are generated.
You'll want to look at that daily, and you'll want to
ensure that none of those defects get stale.
If your defect tracking solution has some way to do
escalations, in other words, figure out what defects are
getting old and send you an email, all the better.
OK, so what I've done so far is I've shown you, or
talked about, how you can utilize tools to help you
manage your requirements, how you can get better
traceability back to the test management solution, and how
you can track those defects.
So what tools do we offer to help you do that?
Well, QAComplete absolutely allows you to do that, and
I'll go through this very quickly.
One of the things that you can do with QAComplete is you can
define your releases.
Notice that we have this little Releases tab.
So you're probably going to be working on a number of
different products that your organization is
developing.
So for each one of those product lines, you can break
those product lines down into specific releases.
And if you're doing agile development, you can even
break your release down into agile sprints.
And notice for each one of these agile sprints, you can
allocate specific requirements that you're going to be
servicing in that particular sprint.
You can also see here that you can define tests against those
requirements, which will be shown here as well.
And then of course, when you begin doing defect tracking,
you'll be able to see all of your defects here as well.
Now again, whenever you're defining your requirements,
let's say that I'm defining the ones for sprint one of this
release, it's important that you can see those very
easily from your test
management tool.
And for any one of these requirements, it's also
important to be able to drill down into that information and
look at all of the tests that have been allocated to a
specific requirement.
If you want to take it a step further, it's good to have a
traceability report.
And that traceability report should be able to look
holistically at a specific requirement, show you how many
defects have been generated against that requirement, show
you how many tests have been linked up to that requirement,
and then what state those tests are in as well.
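The traceability report Steve describes boils down to, for each requirement, counting linked tests, their pass/fail states, and linked defects, so coverage gaps stand out. A rough sketch of that aggregation follows; the field names and record shapes are invented for illustration, not QAComplete's data model.

```python
# Sketch of a traceability report: per requirement, how many tests
# are linked, what state they're in, and how many defects exist.

def traceability_report(requirements, tests, defects):
    report = {}
    for req in requirements:
        linked = [t for t in tests if t["req"] == req]
        report[req] = {
            "tests": len(linked),
            "passed": sum(t["status"] == "passed" for t in linked),
            "failed": sum(t["status"] == "failed" for t in linked),
            "defects": sum(d["req"] == req for d in defects),
            "covered": bool(linked),  # no linked tests = coverage gap
        }
    return report

tests = [{"req": "REQ-1", "status": "passed"},
         {"req": "REQ-1", "status": "failed"}]
defects = [{"req": "REQ-1"}]
report = traceability_report(["REQ-1", "REQ-2"], tests, defects)
print(report["REQ-2"]["covered"])  # REQ-2 has no tests: a gap to fill
```

Run against a real repository, a report like this immediately surfaces requirements with zero linked tests, which is exactly the risk of breaking features without noticing.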
When you're looking at your tests library, it's important
to have a central repository where you can store those
tests and be able to reuse those from release to release.
It's important to also be able to define different
configurations, because we all know that we have to support
mobile in a lot of ways.
We have to support different browsers
and operating systems.
So it's important to be able to define those
configurations.
And then as you begin your testing cycle, it's important
to be able to group a set of tests together and run those,
for a specific release, under those different
configurations.
For example, if I go here and I look at sprint one that
we're working on here for this particular release, and I look
at any of these test sets, you can define your test sets
here, and you can define all of the tests that are part of
that test set.
You can even go into here and choose what releases you're
going to run them under and then what configurations
you're going to run.
And then as you run those tests, you can automatically
generate defects that will show up here in your defect
tracking solution.
And of course, you can get all kinds of analytics here that
will show you how fast you're getting through all of that
defect management.
One of the things that QAComplete also allows you to
do is to set escalation rules.
And those escalation rules will allow you to quickly find
any defects that are getting stale.
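An escalation rule of this kind is essentially a filter: flag any open defect that hasn't been touched in N days so it can be emailed out before it goes stale. Here is a small sketch of that idea; the field names and the seven-day threshold are hypothetical, not QAComplete's actual rule format.

```python
# Illustration of an escalation rule: find open defects older
# than max_age_days so they can be escalated before going stale.

from datetime import date, timedelta

def stale_defects(defects, today, max_age_days=7):
    cutoff = today - timedelta(days=max_age_days)
    return [d for d in defects
            if d["status"] == "open" and d["last_updated"] < cutoff]

defects = [
    {"id": 1, "status": "open",   "last_updated": date(2024, 1, 1)},
    {"id": 2, "status": "open",   "last_updated": date(2024, 1, 9)},
    {"id": 3, "status": "closed", "last_updated": date(2024, 1, 1)},
]
print([d["id"] for d in stale_defects(defects, today=date(2024, 1, 10))])  # [1]
```

Wiring the output of a rule like this to email alerts is what keeps old defects from quietly slipping into production.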
So those are all kinds of things that you'll want to be
looking at whenever you're doing your test management,
both from an automated perspective as well as a
manual perspective.
Well, that kind of covers my section of this.
We'll now turn it over to Sergei to take you to
the next pain here.
SERGEI SOKOLOV: Thank you, Steve.
Well, with all these advanced capabilities, I'm sure some
folks can't help but raise this complaint: well, I can't
really use the right tool for the job.
And there are basically two situations that we encounter
more frequently than others.
They're kind of on the two ends of the spectrum.
And one, we'll call the enterprise solution, which
typically is one massive set of tools from one vendor.
And that set of tools tends to be somewhat expensive and
includes a wide array of tools, pretty much everything just
short of a coffee maker; some of those you might need, and
some just come as part of the package.
And of course, what happens is those tools are forced on you,
because they were in the package.
You didn't select them, you just end up using them no
matter what.
These suites frequently require full-time
administrators, which is kind of unfortunate, because
anything advanced seems to require scripting,
and very few people can actually do that in a
reasonable time.
Cost is also a disadvantage: frequently, people cannot
get enough licenses to be
productive.
Testing [INAUDIBLE]
is under-equipped with test tools or plug-ins, and as a
result, testing certain applications and
technologies becomes a challenge.
Now, on the other end of the spectrum is a hodgepodge of
free open source tools and Microsoft Office.
Yes, it's free to acquire, but it doesn't seem like it's
free to use, because these are disparate tools.
They are not necessarily meant to be scalable, and
the pieces don't talk to each other.
And in the end, what happens is there's too much effort
required to maintain them, but at the same time, the
initial appeal of, oh, we can get these tools for free,
seems to have influenced the decision.
Well, what can be a solution for that?
At the core of the solution, we would like to see
that whoever uses the tools chooses the tools, so there is
power in the hands of the user.
The key here is to recognize that, of course, there's
more than one vendor out there, and with more than one
vendor, there quite possibly is a best-of-breed solution
that combines offerings from multiple.
There certainly is a middle ground between conglomerates
of tools from one vendor and the free offerings:
excellent tools, highly capable, not too expensive,
and very deployable.
You should consider, obviously, when selecting a
tool, the technology roadmap, both of your own enterprise
and the vendor that you're selecting, because time and
again, we see that the development technologies
advance faster than test tools, and in some cases,
substantially faster than some test tools.
It took up to a year for certain vendors to support as
much as Firefox 4, and I can't even remember
when that was released.
With heterogeneous tools, integration points become
important; fortunately, many tools, SmartBear
tools included, have a lot of integrations with existing
tools, whether those are Quality Center, the popular
Jira, open source Subversion, or many others.
And our tools also have extensibility built in, so that
we can easily build interfaces to the
systems that customers use.
In some sense, [INAUDIBLE] testing comes down to cost.
And there is an old adage that, well, if we need to save
cost, then we'll just eliminate testing, whether
it's schedule-wise, budget-wise, or resource-wise.
Of course, one aspect is that test tools are expensive--
some of them.
And the alternative, manual testing, obviously doesn't
solve the problem, because manual effort may be just as
expensive, particularly if it needs to be repeated.
Testing once is fine, but you frequently need to do it two,
three, four, seven times in the span of a week as fixes
accumulate.
And then it's no longer economical.
In some industries, testing is mandated so they just
absorb the expense, but obviously that reflects on the
bottom line.
With the added expense of testing, cost effectiveness is
a major issue.
And of course, there will be reluctance to spend big bucks
on just tools without a guaranteed ROI.
And there's no such thing as a guaranteed ROI.
Further, there is a background scenario where, if you invest
too much in a given vendor and the solution doesn't prove
to be valuable, then that could pretty much end
somebody's career, which nobody is looking forward to.
More sophisticated tools, or more complex tools obviously
need more skilled resources or certified engineers that will
drive up the cost of testing as well.
There [INAUDIBLE] is personal cost, which frequently comes
from, well, we need to stay one more late night in the
office to finish this testing.
One solution to this is, of course, try before you buy.
It's easy to explore more affordable solutions given
that there are many of them out there.
You can prove that a given tool works for your technology
before you battle it out with sales or your own boss, so
you'll have all the ammunition to make the case.
So you can show value from the proof of concept to simplify
the executive buy-in.
In some scenarios, proofs of concept can be complicated, so
support during that period is very important.
So make sure that the vendor you work with provides support
during trials as much as they do with purchased tools.
SmartBear does it 100%.
So now we've presented all of that material, so what
can you actually do with it?
Well, there are a number of scenarios that will present
you with natural opportunities to compare the tools
of different vendors.
Then, don't just take our word for it.
Put our tools side by side with the best you have and see
what comes out on top.
After all, the trial is free, and it wouldn't take much time
to set the SmartBear tools up.
The technology field is changing faster than ever.
And obviously, we want the test tools to stay in step.
New browsers come out almost every month these days.
And obviously, if you have a web application, you need to
be able to ensure it's compatible
with the latest versions.
So you may have a technology point that your current tool
set just doesn't seem to support, and that's good
cause to look at some other tools.
If you need to test a Flex application, which seems to be
used in enterprise context more and more these days,
right up out of the gates without writing additional
test API code that, for example, QTP requires, you can
try out TestComplete.
If you need to provide everybody on the team,
developers, testers, management, with a uniform view
into manual and automated test coverage, try out [INAUDIBLE].
If you need to set up a simple load test and don't want to
hand-write scripts or deal with a megabucks installation, try
our LoadComplete instead of LoadRunner.
If you're dealing with web services and need to mock up
some parts of the system on the cheap, we have SoapUI Pro.
So give SmartBear a try.
A trial is always free.
So now we've come to the most entertaining portion of the
whole presentation, your questions.
I've seen there are a number of questions that have
already come in.
I'm sure that almost any section caused some thoughts,
and we encourage you to ask us anything that will pertain to
your business, your problems, your technologies.
And I would hope to provide you with some answers and
technical solutions.
So there are already some questions, as I said, that
have come in.
Let's see, Nick, there was a--
just navigate to the right place--
there were a couple of questions about extensions.
[INAUDIBLE]
asked whether extensions are predefined or they can be
created by power users.
Can you elaborate on that a little bit?
NICK OLIVO: Sure.
So extensions are created primarily by power users.
We do have some predefined extensions that ship with
TestComplete, and there are others that are available for
download off of our website.
One of those being the registry checkpoint that I
showed during the course of the session.
But typically, the extensions themselves are created by
someone who has had a coding background, and they're
distributed out to everybody else in the team so that they
can take advantage of that.
SERGEI SOKOLOV: Thank you.
Regarding the network testing, what are network requirements
for network tests?
Are they constrained to the same geographic location, or
can they be run anywhere as long as there is a VPN?
NICK OLIVO: As long as there's a VPN, TestComplete can call
out to that remote system.
So if you've got an office in Boston and an office in
Shanghai and they are connected via VPN, then you'll
be able to use systems from both offices in
a distributed test.
SERGEI SOKOLOV: Thank you.
There were a couple of questions from Elaine that
refer to this product.
Would this product be good for new development, or
should it be used for maintenance and enhancement development?
Since we're not sure which product of the two we showed
that really applies to, it was asked during the time that
Steve was presenting, I would direct that question to him.
STEVE MILLER: OK.
Yeah, Elaine, if the question is whether it's a good idea
to use a test management solution when you're just
getting started with the development of a new product, or
whether it's better to wait until the product has been put
into production and gets a little bit more mature, my answer
to that would be you really want to get
started as early as possible.
Because if you can go in and create some good
processes around that, get into the habit of defining
your requirements, have a place where everybody can see
those requirements and approve them, and build a set of tests
around that, then those tests that you created
can be converted into
regression tests later on.
And then as you mature, you can take those manual tests
and convert those right on into automated tests, and you won't
have to go through the rigor of running those manually.
You can just put them on automation.
SERGEI SOKOLOV: Thank you, Steve.
Well, there's one that goes right to the bone of it.
What are the benefits of QAComplete over--
well, it's actually not quite the right mix.
The question asked by Dennis is what are the benefits of
QAComplete over HP Quick Test Pro.
I presume that would actually read what are the benefits of
QAComplete versus Quality Center?
STEVE MILLER: I'll take that.
SERGEI SOKOLOV: Yeah, would you mind taking that one?
STEVE MILLER: Absolutely, I'd be happy to, and we can also ask
Nick to chime in on how TestComplete compares to Quick
Test Pro as well.
So let's first talk about QAComplete versus HP Quality
Center or their new ALM suite.
And one of the primary benefits with QAComplete is
that it works on any browser.
So you can use QAComplete on Internet Explorer.
You can use it with Firefox, with Chrome, with even Safari.
And for those of you that have used Quality Center in the
past, you know that you're kind of locked into IE only.
One of the other benefits too is that I think that you saw
today how graphical QAComplete is and how easy it is to find
dashboards and reports.
And it has a much better run engine, where you can very
easily generate defects as you're running a test that
fails, you can easily create a defect from it.
And then finally, the big elephant in the room is how is
the cost different?
Well, QAComplete is about a fifth of the cost of HP
Quality Center.
I didn't misspeak there.
I said about a fifth the cost.
OK, so that's a big difference in the two tools.
Now, I'll turn it over to Nick and let him talk about the
differences between TestComplete
and Quick Test Pro.
NICK OLIVO: All right.
Thanks, Steve.
There are a couple of differences right out of the
gate between TestComplete and QTP.
For starters, we support five
different scripting languages, where QTP only has the one:
we have VBScript, JScript, DelphiScript,
C#Script, and C++Script.
We also have superior object recognition.
So TestComplete supports a boatload of different types of
controls right out of the box.
And even for those that we don't necessarily provide
direct support for-- if there's a brand new company
that just releases a new tree control, for example-- we may
not be able to record against that directly, but we
can programmatically manipulate it.
So you're never left out in the cold as far as those kinds
of things go.
Other things to talk about: obviously, extensions are
something that QTP doesn't have.
Distributed testing, like we saw earlier, isn't something
that it has either.
Also, we have a runtime license called TestExecute,
which allows you to run tests on systems that don't have a
full automation license.
In fact, you can put that TestExecute license on as many
systems on your network as you want.
It's a concurrent license, so if you have 10 licenses of it,
you can install it in a thousand places and then run
ten at a time.
And then finally, like Steve said, the cost.
TestComplete is about a quarter of the cost of a QTP
license, so it's actually possible to outfit an entire
team of testers with TestComplete for what one
license of Quick Test Pro will cost.
So those are the high level items that you might want to
consider there.
SERGEI SOKOLOV: Thank you, Nick.
There were a number of questions related to cost.
Of course, particularly given our emphasis at the end
of the presentation on that aspect, we also believe that
it's a very interesting topic.
But cost will differ depending on configurations, the
packaging of tools and so forth.
So the best way to actually get any specific answers would
be to contact SmartBear through the website.
There's a Contact Us option on the front page, and that will
put you in touch with a sales rep.
And they will be able to answer many of the questions.
We do publish our list prices on the web, so for any
product, you can go and look it up.
And you can also even buy our tools from the web as well.
There were also a group of questions regarding
integration points.
There were some specific systems that were
mentioned like Jira.
And Steve, I'd like you to elaborate a little bit on that
somewhat later.
There were questions about whether our tools offer APIs
for integration.
And the answer is yes.
Both tools that we showed, TestComplete and QAComplete,
offer APIs.
TestComplete's API is COM-based, so you can basically write
scripts against that and use them in your automation.
And [INAUDIBLE] as I recall, QAComplete has a web service
interface that you can also utilize to establish
connections with third party systems.
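As an illustration of the web service interface just mentioned, here is a minimal sketch of building a SOAP-style request in Python. The service namespace, the operation name (`GetDefects`), and the parameter name are made-up placeholders for illustration, not the documented QAComplete API; consult the vendor's API reference for the real operations and endpoint.

```python
# Sketch: constructing a SOAP 1.1 request envelope for a hypothetical
# web-service operation. Operation and namespace are illustrative only.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, ns="http://example.com/qacomplete"):
    """Build a minimal SOAP 1.1 request envelope for the given operation."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():
        child = ET.SubElement(op, f"{{{ns}}}{name}")
        child.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# The resulting XML would then be POSTed to the service endpoint with an
# ordinary HTTP client; response parsing is omitted here.
request_xml = build_soap_request("GetDefects", {"ProjectId": 42})
```

The same pattern applies to any of the web service operations: build the envelope, POST it, parse the response.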
STEVE MILLER: Yeah, and I'd like to-- it would probably be
interesting just to quickly show you how our
integrations work.
I can do that in about two minutes if you guys are
interested in seeing that.
I could show you how we can integrate with Jira very
seamlessly.
We have a number of different integrations that come
out of the box at no additional cost with our tool.
You can integrate with Jira, with Bugzilla, with HP Quality
Center, a lot of those different tools out there.
And here's just an example.
You're looking at QAComplete right now, and we're over here
in the defect tracking area.
Notice that I have some defects here.
But if I open up one of these defects, notice that I set
that up to automatically sync with Jira.
And all it took was an integration
tool we have called OpsHub.
You install that.
You configure the integration, and it's off and running.
It can do both a one-way integration as well as a
two-way integration.
In other words, you can create defects here.
It can ship them over to Jira.
You can create them in Jira.
It can ship them back here.
You can update them on either side.
It has conflict resolution, all that kind of stuff.
And then if you want to access that item directly from within
QAcomplete and open up Jira, it's just a single click, and
it'll bring that item right up.
It's very, very simple to set that up over in here.
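To make the two-way sync idea concrete, here is a rough sketch of the core reconciliation step: each side carries a last-modified timestamp, and on conflict the newer edit wins. This is an assumption-laden toy, not how the OpsHub connector actually works; real sync tools add field mapping, retries, and audit trails.

```python
# Toy conflict resolution for a defect tracked in two systems.
# "Newest modification wins" is an illustrative policy, not the
# documented behavior of any real integration tool.
def reconcile(local, remote):
    """Return the winning copy of a record synced between two systems.

    Each record is a dict with a 'modified' timestamp (e.g. epoch
    seconds) plus arbitrary fields; None means the record does not
    exist on that side yet.
    """
    if local is None:
        return remote          # created remotely, ship it over
    if remote is None:
        return local           # created locally, ship it over
    # Both exist: the more recently modified copy wins the conflict.
    return local if local["modified"] >= remote["modified"] else remote

winner = reconcile(
    {"id": 7, "status": "Open", "modified": 100},
    {"id": 7, "status": "Fixed", "modified": 250},
)
```

In this example the remote copy was edited later, so its "Fixed" status would be pushed back to the local side.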
One of the other questions that I saw coming in that was
a little related to this is that, well, hey look, I have a
number of test cases over in Excel.
Is there an easy way to pull those test cases into
QAComplete?
And the answer to that is absolutely.
You just go in here into your test library.
You click on Actions.
You click on Import on an Excel spreadsheet.
You choose that from your hard drive, and it takes you
through a little mapping wizard that'll map those
fields out.
It's that easy to get that data in there.
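The mapping wizard described above boils down to renaming spreadsheet columns onto the tool's test case fields. Here is a small Python sketch of that step using CSV data; the column names and target field names are invented for illustration and are not QAComplete's actual field names.

```python
# Sketch of the field-mapping step an import wizard performs:
# spreadsheet columns are renamed to the tool's test-case fields.
# Both sides of FIELD_MAP are hypothetical names.
import csv
import io

FIELD_MAP = {
    "Test Name": "title",
    "Steps": "steps",
    "Expected": "expected_result",
}

def import_test_cases(csv_text, field_map=FIELD_MAP):
    """Read CSV rows and rename columns according to field_map."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {field_map[col]: value for col, value in row.items() if col in field_map}
        for row in reader
    ]

cases = import_test_cases(
    "Test Name,Steps,Expected\n"
    "Login,Open app; enter creds,User is logged in\n"
)
```

Columns not present in the map are simply dropped, which mirrors how a wizard lets you skip spreadsheet columns you don't need.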
And then finally, somebody had asked, OK, well, if
I'm looking at QAComplete from a cost perspective, what would
it cost if we, let's say, had 10 users of QAComplete?
And Sergei said, well, it depends on what model you're
looking at, because we do offer a software as a service,
where we could host it for you.
The cost of that is $399 per user per year, so you're
looking at 10 concurrent users for about $3,990 a year,
pretty inexpensive.
If you need it installed locally in your own
organization, to be able to run it from your own servers,
we offer an on-premises edition.
It's $899 per user.
That's a one time fee.
So 10 users would be $8,990, just to give you an idea.
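The arithmetic behind those figures, written out as a quick sketch (using the per-user prices quoted in the session: $399 per user per year hosted, $899 per user one-time on-premises):

```python
# Pricing arithmetic from the session, for comparison purposes only;
# actual pricing depends on configuration and should be confirmed
# with the vendor.
def saas_cost(users, years, per_user_per_year=399):
    """Hosted (software-as-a-service) cost over a number of years."""
    return users * years * per_user_per_year

def on_prem_cost(users, per_user=899):
    """On-premises cost: a one-time per-user fee."""
    return users * per_user

ten_user_saas = saas_cost(10, 1)      # 10 users for one year
ten_user_on_prem = on_prem_cost(10)   # 10 users, one-time
```

For 10 users this gives $3,990 per year hosted versus an $8,990 one-time fee on-premises, matching the figures quoted above.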
SERGEI SOKOLOV: Thanks, Steve.
There was--
which one should we go to next?
Steve, I think it's going to go back to you.
Is there a conversion from HP Quality Center to QAComplete
or ALMComplete, and how complex is that?
STEVE MILLER: Oh, that's a very good question.
Yeah, we really tried to give you an easy on ramp for
getting out of Quality Center and getting into QAComplete or
ALMComplete.
We offer an HP conversion tool, and the cost of that
tool is a one-time fee of $3,000.
One of our partners built that tool.
They've done a really good job with it,
and it's really extensive.
It'll do things like it'll create all the folder
structures.
So if you were to have a folder structure that you
built in Quality Center, it'll bring that same folder
structure up over in QAComplete for example.
It will convert all of your requirements.
It'll convert all of your tests.
It'll convert all of your defects.
It'll even set up users.
If you've got users set up in Quality Center and you want to
bring those over into the tool, it can bring
those over as well.
It can set up custom fields and choice
lists, all those things.
It's really, really robust.
It's a pretty impressive tool.
SERGEI SOKOLOV: Thank you, Steve.
Let's see.
What else do we have here?
There was one licensing question regarding virtual
machines if I can find that.
Or did it disappear?
While I am looking for this one, do we provide tutorials
or training?
Yes, we do provide both.
One of our signature approaches is that we post a
lot of material on our website, both in articles and
videos, and that applies to TestComplete and QAComplete,
both of them.
So not only are you able to get an idea of what the tools
do, but there are specific short videos covering specific
technical application aspects of all of our tools.
So if you go to SmartBear.com, then you'll find all of that
material at your fingertips, and you're free to share that
with your colleagues.
We also have our YouTube channel.
So if you search for SmartBear on YouTube, you will find
videos there as well.
OK.
That's an interesting question.
How can I do a proof of concept? We're a small company with a
QA team of two.
Is there a way to justify the cost of purchase?
We have a lot of customers with small teams, who found it
very advantageous to use test automation.
As a matter of fact, I just talked to a team last week.
They used TestComplete to test a very complex Flex
application.
There are two folks involved.
They utilized some other resources, and the [INAUDIBLE] business
analysts, to create the tests, and they were very happy.
And it seems like the investment worked very well
for them to automate the tests.
They have some compliance requirements, so they do need
to do testing, but obviously, TestComplete allowed them to
do that very effectively.
But I think the first step in getting the proof of concept
going would be to sign up for a trial and see
where it takes you.
Scripting languages in TestComplete, I think Nick
covered that.
That's five scripting languages.
Most commonly used are VBScript and JScript.
I'm actually being reminded that it's now
the top of the hour.
We need to wrap up the webinar and invite everybody to join
us in the Twitter forum with #HPAlternatives.
It seemed that the Twitter API had some problems just
recently, but I guess the news is that it has been repaired.
The hashtag that's shown on this slide is wrong.
The one you want to use is #HPAlternatives.
So thank you very much for being here with us.
We were glad to have you.
We hope that you join us for the Twitter after party.
Thanks again.
We're going to wrap up the webinar now.