The Checker Framework is an innovative programming tool that helps you prevent bugs at development time, before they escape to production.
Java's type system prevents some bugs, such as `int count = "hello";`. However, it does not prevent other bugs, such as null pointer dereferences, concurrency errors, disclosure of private information, incorrect internationalization, out-of-bounds indices, and so forth. Pluggable type-checking replaces a programming language's built-in type system with a more powerful, expressive one.
We have created around 20 new type systems, and other people have created many more. The more powerful type system is not just a bug-finding tool: it is a verification tool that gives a guarantee that no errors (of certain types) exist in your program. Even though it is powerful, it is easy to use. It follows the standard typing rules that programmers already know, and it fits into their workflow.
The Checker Framework is popular: it is used daily at Google, Amazon, Uber, on Wall Street, and in other companies from big to small. It is attractive to programmers who care about their craft and the quality of their code. The Checker Framework is the motivation for Java's type annotations feature. It has received multiple awards. With this widespread use, there is a need for people to help with the project: everything from bug fixes, to new features, to case studies, to integration with other tools. We welcome your contribution!
Why should you join this project? It's popular, so you will have an impact. It makes code more robust and secure, which is a socially important purpose. Past GSOC students have had great success. (David Lazar became a graduate student at MIT; multiple students have published papers in scientific conferences.) You will get to scratch your own itch by creating tools that solve problems that frustrate you. And, we have a lot of fun on this project!
Prerequisites: You should be very comfortable with the Java programming language and its type system. You should know how a type system helps you and where it can hinder you. You should be willing to dive into and understand a moderately-sized codebase. You should understand fundamental object-oriented programming concepts, such as behavioral subtyping: subtyping theory permits argument types to change contravariantly (even though Java forbids it for reasons related to overloading), whereas return types may change covariantly both in theory and in Java.
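The subtyping rules above can be seen directly in Java. Here is a minimal, self-contained sketch (class names invented for illustration) showing that an override may narrow its return type, while a widened parameter type would not override at all:

```java
// Covariant return types are legal in Java: an override may return a subtype.
class Animal { }
class Dog extends Animal { }

class Shelter {
    Animal adopt() { return new Animal(); }
}

class DogShelter extends Shelter {
    @Override
    Dog adopt() { return new Dog(); } // covariant return: allowed and checked by @Override
    // By contrast, declaring adopt(Object extra) with a wider parameter type would
    // NOT override anything; Java treats it as an overload, even though subtyping
    // theory permits contravariant parameter types.
}

public class SubtypingDemo {
    public static void main(String[] args) {
        Shelter s = new DogShelter();
        System.out.println(s.adopt() instanceof Dog); // the dynamic result is a Dog
    }
}
```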
Potential projects: Most of this document lists potential summer projects. The projects are grouped roughly from easiest to most challenging. Many of the projects are applicable beyond Google Summer of Code.
To get started, first do a case study of using the Checker Framework. Do this before submitting your proposal.
When you do the case study:

- Don't make gratuitous changes to the code, such as reformatting it or reordering `import` statements. Doing so bloats the size of the diffs and makes it hard to understand the essential changes.
- Don't add an `if` statement that always succeeds, just to suppress a warning. Convince yourself that both branches can execute, or else don't add the `if` statement.
- Each time you write a `@SuppressWarnings` annotation, explain why the checker warning is a false positive and you are certain the code is safe.
- (If you need no `@SuppressWarnings`, then the annotations are correct, your program is correct, and you don't need feedback. Congratulations! You can try a more significant case study.)
Share the case study as soon as you finish it or as soon as you have a question that is not answered in the manual; don't wait until you submit your proposal. The subject line should be descriptive (not just "Case study", but "Nullness case study of Apache Commons Exec library"). You should give us access to your annotated code.
Once you have done this work on a small program such as from your coursework, you can repeat the process with an open-source program or library.
The primary result of your case study is that you will discover bugs in the subject program, or you will verify that it has no bugs (of some particular type). If you found bugs in open-source code, report them to the program's maintainer, and let us know when they are resolved. If you verified open-source code to be correct, that is great too; let us know and point us at the fully-annotated, verified program.
Another outcome of your case study is that you may discover bugs, limitations, or usability problems in the Checker Framework. Please report them. We'll try to fix them, or they might give you inspiration for improvements you would like to make to the Checker Framework this summer. You can also try to fix them yourself and submit a pull request, but that is not a requirement. You may discuss your ideas with us by sending mail to checker-framework-gsoc@googlegroups.com.
Note that we do not recommend that you run many different checkers on small, artificial programs. Instead, run one checker on a more substantial program.
Why should you start with a case study, instead of diving right into fixing bugs, designing a new type system, or making other changes to the Checker Framework? Before you can contribute to any project, you must understand the tool from a user point of view, including its strengths, weaknesses, and how to use it. Therefore, you need to complete a substantive case study first.
We are very happy to answer your questions, and we are eager to interact with you. Before you ask a question, read these “getting started” instructions (that is, this file) and search in the Checker Framework manual for the answer. Don't send us a message that says nothing but “please guide me” or “tell me how to fix this bug”. Such a message shows that you haven't thought about the problem and haven't tried to solve it yourself. It also shows that you have not read this document, and we don't want to work with people who cannot read instructions!
Your questions should show that you will be a productive colleague over the summer: tell us what you have tried, tell us what went wrong or where you got stuck, and ask a concrete technical question that will help you get past your problem. If you can do that, then definitely ask your question, because we don't want you to be stuck or frustrated.
Whenever you send email (related to GSoC or not), please use standard email etiquette:

- avoid all-caps;
- use a descriptive subject line;
- don't put multiple different topics in a single email message;
- start a new thread with a new subject line when you change the topic;
- don't clutter discussions with irrelevant remarks;
- don't use screenshots (unless there is a problem with a GUI); instead, cut-and-paste the code into your message;
- if you are making a guess, clearly indicate that it is a guess and your grounds for it.

If you violate these basic rules, you will look unprofessional, and we don't want you to give a bad impression. Bug reports should be complete and should usually be reported to the issue tracker.
Some GSOC projects have a requirement to fix an issue in the issue tracker. We do not, because it is unproductive. Don't try to start fixing issues before you understand the Checker Framework from the user point of view, which will not happen until you have completed a case study on an open-source program.
To apply, you will submit a single PDF through the Google Summer of Code website. This PDF should contain two main parts. We suggest that you number the parts and subparts to ensure that you don't forget anything, and that we don't overlook anything in your application. You might find it easiest to create multiple PDFs for the different parts, then concatenate them before uploading to the website, but how you create your proposal is entirely up to you.
The proposal should have a descriptive title, both in the PDF and in the GSoC submission system. Don't use a title like "Checker Proposal" or "Proposal for GSoC". Don't distract from content with gratuitous graphics.
If you want to create a new type system (whether one proposed on this webpage or one of your own devising), then your proposal should be the type system's user manual. You don't have to integrate it in the Checker Framework repository (in other words, use any word processor or text editor you want to create a PDF file you will submit), but you should describe your proposed checker's parts in precise English or simple formalisms and you should follow the suggested structure.
List the tasks or subparts that are required to complete your project. This will help you discover parts that you might otherwise forget. We do not require a detailed timeline, because at this point, you don't know enough to create one.
Never literally cut-and-paste text that was not written by you, because that would be plagiarism. If you quote from text written by someone else, give proper credit.
If you want to do exactly what is already listed on this page, then just say that (but be specific about which one!), and it will not hurt your chances of being selected. However, you might have specific ideas about extensions, about details that are not mentioned on this webpage, about implementation strategies, and so forth. If you want to do a case study, say what program you will do your case study on. Don't submit a proposal that is just a rearrangement of text that already appears on this page or in the Checker Framework manual, because it does not help us to assess your likelihood of being successful. (You can propose an idea that's here, but show what progress you have made.)
Give us access to your case study work: either attach a .zip file or provide a GitHub URL.
The best way to impress us is by doing a thoughtful job in the case study. The case study is even more important than the proposal text, because it shows us your abilities. The case study may result in you submitting issues against the issue tracker of the program you are annotating or of the Checker Framework. Pull requests against our GitHub project are a plus but are not required: good submitted bugs are just as valuable as bug fixes! You can also make a good impression by correctly answering questions from other students on the GSOC mailing list.
Get feedback! Feel free to ask questions to make your application more competitive. We want you to succeed. Historically, students who start early and get feedback are most successful. You can submit a draft proposal via the Google Summer of Code website, and we will review it. We do not receive any notification when you submit a draft proposal, so if you want feedback, please tell us that. Also, we can only see draft proposals; we cannot see final proposals until after the application deadline has passed.
These projects take an existing type-checker, apply it to a codebase (you can choose your favorite one, or you can ask for suggestions), and determine whether the type system is easy to use and whether it is effective in revealing or preventing defects. Case studies are our most important source of new ideas and improvements: our most useful features have arisen as a result of an observation made during a case study. Many people have started out “just” doing a case study but have ended up making deep, fundamental contributions and even publishing scientific papers about their discoveries.
You should do a small case study during the application process (or maybe a large one, depending on your ambition). A case study is the best way to learn about the Checker Framework, determine whether you would enjoy joining the project during the summer, and show your aptitude so that you will be chosen for the summer.
A set of large case studies is one possible summer task. The most common choice is case studies of a recently-written type system, to determine its usability. Another choice is to annotate popular libraries for an existing type system, to make it more usable.
Here are a few suggestions, but a case study of any type system distributed with the Checker Framework is of value.
When type-checking a method call, the Checker Framework uses the method declaration's annotations. This means that in order to type-check code that uses a library, the Checker Framework needs an annotated version of the library.
The Checker Framework comes with a few annotated libraries. Increasing this number will make the Checker Framework even more useful, and easier to use.
After you have chosen a library, fork the library's source code, adjust its build system to run the Checker Framework, and add annotations to it until the type-checker issues no warnings.
Before you get started, be sure to read How to get started annotating legacy code. More generally, read the relevant sections of the Checker Framework manual.
There are several ways to choose a library to annotate.
Whatever library you choose, you will need to deeply understand its source code. You will find it easier to work with a library that is well-designed and well-documented.
You should choose a library that is not already annotated. There are two exceptions to this.
Show that the ASM library, or the BCEL library, properly handles signature strings (or find bugs in them).
To get started:
git checkout typecheck-signature
Some challenging aspects of this case study are:
One is calls such as `someString.replace('.', '/')`, which converts from `@ClassGetName` to `@FieldDescriptor`. It also converts from `@FullyQualifiedName` to `@BinaryName`, but only for non-anonymous classes. The full rules for that, and for other calls such as `someString.replace('/', '.')`, need to be worked out and implemented.
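For concreteness, here is a small, self-contained sketch (not part of the case study branch) showing, via JDK reflection only, how the same class has several string representations and how a `replace` call converts between dotted and slashed forms:

```java
public class SignatureDemo {
    public static void main(String[] args) {
        // A nested class shows the differences between representations most clearly.
        Class<?> c = java.util.Map.Entry.class;
        // Class.getName() uses '$' for nested classes ("binary name" style):
        System.out.println(c.getName());          // java.util.Map$Entry
        // getCanonicalName() uses dots throughout (fully-qualified name style):
        System.out.println(c.getCanonicalName()); // java.util.Map.Entry
        // The field descriptor for this class would be "Ljava/util/Map$Entry;".
        // The conversion discussed above: dots become slashes.
        System.out.println("java.lang.String".replace('.', '/')); // java/lang/String
    }
}
```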
Android uses its own annotations that are similar to some in the Checker Framework. Examples include the Android Studio support annotations, including `@NonNull`, `@IntRange`, `@IntDef`, and others. The goal of this project is to implement support for these annotations. That is probably as simple as creating aliased annotations by calling method `addAliasedAnnotation()` in AnnotatedTypeFactory.
Then, do a case study to show the utility (or not) of pluggable type-checking, by comparison with how Android Studio currently checks the annotations.
The Signedness Checker ensures that you do not misuse unsigned values, such as by mixing signed and unsigned values in a computation or by performing a meaningless operation.
Perform a case study of the Signedness Checker, in order to detect errors or guarantee that code is correct.
You will need to find Java projects that use unsigned arithmetic, or that could use unsigned arithmetic but do not. When doing the case study, it is important to type-check both a library and a client that uses it. Type-checking the client will ensure that the library annotations are accurate.
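As a minimal illustration of the kind of bug the Signedness Checker targets, consider the same byte interpreted as signed versus unsigned (a self-contained sketch, not taken from any particular library):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        byte b = (byte) 0x80;  // bit pattern 1000_0000
        // Signed interpretation: -128. Unsigned interpretation: 128.
        // A signed comparison silently gives the "wrong" answer for unsigned data:
        System.out.println(b < 0x7F);                      // true  (-128 < 127)
        // Converting to the unsigned value first gives the intended result:
        System.out.println(Byte.toUnsignedInt(b) < 0x7F);  // false (128 > 127)
    }
}
```

Mixing the two interpretations in one computation is exactly the error the checker is designed to rule out at compile time.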
Here are some libraries that you could annotate (some are already annotated for you). You would need to find client code that uses the signedness-sensitive routines.
- In `Integer` and `Long`, these include `compareUnsigned`, `divideUnsigned`, `parseUnsignedInt`, `remainderUnsigned`, and `toUnsignedLong`.
- `DataInputStream`, `ObjectInputStream`, and `RandomAccessFile` have `readUnsignedByte`.
- `Arrays` has `compareUnsigned`.
Here are some other possible case studies; you would need to determine whether this code is the library, the client, or both:
Your case studies will show the need for enhancements to the Signedness Checker. For example, the Signedness Checker does not currently handle boxed integers and BigInteger; these haven't yet come up in case studies but could be worthwhile enhancements. There may also be the need to write more annotations for libraries such as the JDK.
Java 8 introduced the `Optional` class, a container that is either empty or contains a non-null value. It is intended to solve the problem of null pointer exceptions. However, `Optional` has its own problems. Because of `Optional`'s problems, many commentators advise programmers to use `Optional` only in limited ways. The goal of this project is to evaluate the Optional Checker, which warns programmers who have misused `Optional`. Another goal is to extend the Optional Checker to make it more precise or to detect other misuses of `Optional`.
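As a minimal illustration of one misuse, here is a sketch (class and method names invented) where `get()` is called without first checking `isPresent()`, so an empty `Optional` crashes at run time:

```java
import java.util.Optional;

public class OptionalMisuse {
    static String firstUpper(Optional<String> s) {
        // Misuse: calling get() without checking isPresent() throws
        // NoSuchElementException when the Optional is empty.
        return s.get().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(firstUpper(Optional.of("hi"))); // HI
        try {
            firstUpper(Optional.empty()); // crashes: the very NPE-like failure
        } catch (java.util.NoSuchElementException e) {
            System.out.println("empty Optional crashed");
        }
    }
}
```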
These are just some suggestions; many other libraries need annotations.
Guava is already partially annotated with nullness annotations — in part by Guava's developers, and in part by the Checker Framework team. However, Guava does not yet type-check without errors. Doing so could find more errors (the Checker Framework has found nullness and indexing errors in Guava in the past) and would be a good case study to learn the limitations of the Nullness Checker.
This project is related to the Bazel build system, and was proposed by its development manager.
The Bazel codebase contains 1586 occurrences of the `@Nullable` annotation. This annotation indicates that a variable may hold a null value. This is valuable documentation and helps programmers avoid null pointer exceptions that would crash Bazel. However, these annotations are not checked by any tool. Instead, programmers have to do their best to obey the `@Nullable` specifications in the source code. This is a lost opportunity, since documentation is most useful when it is automatically processed and verified. (For several years, Google tried using FindBugs, but eventually abandoned it: its analysis is too weak, suffering too many false positives and false negatives.)
Despite the programmers' best efforts, null pointer exceptions do still creep into the code, impacting users. The Bazel developers would like to prevent these. They want a guarantee, at compile time, that no null pointer exceptions will occur at run time.
Such a tool already exists: the Nullness Checker of the Checker Framework. It runs as a compiler plug-in, and it issues a warning at every possible null pointer dereference. If it issues no warnings, the code is guaranteed not to throw a `NullPointerException` at run time.
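As a minimal illustration of what the checker protects against, consider this sketch (names invented): `Map.get` returns null for a missing key, so the Nullness Checker would warn about any dereference of the result that is not guarded by a null check like the one below:

```java
import java.util.HashMap;
import java.util.Map;

public class NullnessDemo {
    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put("a", "1");
        // Map.get returns null for a missing key. Writing v.length() here
        // without the null check would compile, but could throw a
        // NullPointerException at run time; the Nullness Checker would warn.
        String v = m.get("b");
        System.out.println(v == null ? "missing" : v.length());
    }
}
```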
The goal of this project is to do a large-scale case study of the Nullness Checker on Bazel, to understand how the Nullness Checker can be used on a large industrial codebase. How many lurking bugs does it find? What `@Nullable` annotations are missing from the codebase because the developers failed to write them? What are its limitations, such as code patterns that it cannot recognize as safe? (You might create new analyses and incorporate them into the Nullness Checker, or you might just report bugs to the Nullness Checker developers for fixing.) What burdens does it place on users? Is the cost-benefit tradeoff worth the effort; that is, should Google adopt this tool more broadly? How should it be improved? Are the most needed improvements in the precision of the analysis, or in the UI of the tooling?
Annotate the BCEL library to express its contracts with respect to nullness. Show that the BCEL library has no null pointer exceptions (or find bugs in BCEL). There are already some annotations in BCEL, but they have not been verified as correct by running the Nullness Checker on BCEL. (Currently, those annotations are trusted when type-checking clients of BCEL.)
To get started:
git checkout typecheck-nullness
Some challenging aspects of this case study are:
One is the `copy()` method. Some implementations of `copy()` return null, but are not documented to do so. In addition, some implementations of `copy()` catch and ignore exceptions. I think it would be nicest to change the methods to never return null, but to throw an exception instead. (This is no more burdensome to users, who currently have to check for null.) Alternately, the methods could all be documented to return null.
Compiler writers have come to realize that clarity of error messages is as important as the speed of the executable (1, 2, 3, 4). This is especially true when the language or type system has rich features.
The goal of this project is to improve a compiler's error messages. One example (not the only possible one) is the Checker Framework. Here are some distinct challenges:
For example, the `@IndexFor("a")` annotation is syntactic sugar for `@NonNegative @LTLengthOf("a")`, and those desugared types are the ones that currently appear in error messages. It would be good to show simpler types or the ones that the user wrote.
It would be reasonable to start by improving the Index Checker's error messages, which frequently stymie users. Then, generalize the results to other type systems.
By default, the Checker Framework is unsound in several circumstances. “Unsound” means that the Checker Framework may report no warning even though the program can misbehave at run time.
The reason that the Checker Framework is unsound is that we believe that enabling these checks would cause too many false positive warnings: warnings that the Checker Framework issues because it cannot prove that the code is safe (even though a human can see that the code is safe). Having too many false positive warnings would irritate users and lead them not to use the checker at all, or would force them to simply disable those checks.
We would like to do studies of these command-line options to see whether our guess is right. Is it prohibitive to enable sound checking? Or can we think of enhancements that would let us turn on those checks that are currently disabled by default?
Many other tools exist for prevention of programming errors, such as Error Prone, NullAway, FindBugs, JLint, PMD, and IDEs such as Eclipse and IntelliJ. These tools are not as powerful as the Checker Framework (some are bug finders rather than verification tools, and some perform a shallower analysis), but they may be easier to use. Programmers who use these tools wonder, "Is it worth my time to switch to using the Checker Framework?"
The goal of this project is to perform a head-to-head comparison of as many different tools as possible. You will quantify:
This project will help programmers to choose among the different tools — it will show when a programmer should or should not use the Checker Framework. This project will also indicate how each tool should be improved.
One place to start would be with an old version of a program that is known to contain bugs. Or, start with the latest version of the program and re-introduce fixed bugs. (Either of these is more realistic than introducing artificial bugs into the program.) A possibility would be to use the Lookup program that has been used in previous case studies.
The Checker Framework is shipped with about 20 type-checkers. Users can create a new checker of their own. However, some users don't want to go to that trouble. They would like to have more type-checkers packaged with the Checker Framework for easy use.
Each of these projects requires you to design a new type system, implement it, and perform case studies to demonstrate that it is both usable and effective in finding/preventing bugs.
The Nullness Checker issues a false positive warning for this code:
```java
import java.util.PriorityQueue;
import org.checkerframework.checker.nullness.qual.NonNull;

public class MyClass {
    public static void usePriorityQueue(PriorityQueue<@NonNull Object> active) {
        while (!(active.isEmpty())) {
            @NonNull Object queueMinPathNode = active.peek();
        }
    }
}
```
The Checker Framework does not determine that `active.peek()` returns a non-null value in this context. The contract of `peek()` is that it returns a non-null value if the queue is not empty and the queue contains no null values.
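That contract is easy to confirm with a small self-contained sketch:

```java
import java.util.PriorityQueue;

public class PeekDemo {
    public static void main(String[] args) {
        PriorityQueue<String> q = new PriorityQueue<>();
        // peek() on an empty queue returns null rather than throwing:
        System.out.println(q.peek()); // null
        q.add("x");
        if (!q.isEmpty()) {
            // Safe: the queue is known non-empty here, so peek() is non-null.
            System.out.println(q.peek().length()); // 1
        }
    }
}
```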
To handle this code precisely, the Nullness Checker needs to know, for each queue, whether it is empty. This is analogous to how the Nullness Checker tracks whether a particular value is a key in a map.
It should be handled the same way: by adding a new subchecker, called the Nonempty Checker, to the Nullness Checker. Its types are:

- `@UnknownNonEmpty` — the queue might or might not be empty
- `@NonEmpty` — the queue is definitely non-empty

There is a start at this type-checker in the `nonempty-checker` branch.
However, it is not done. (In fact, it doesn't even compile.) For information about what needs to be done, see issue #399.
When you are done, the Nullness Checker should issue only the `// ::` diagnostics from `checker/tests/nullness/IsEmptyPoll.java`, no more and no fewer. You can test that by running the Nullness Checker on the file; when you are done, delete the `// @skip-test` line so that the file is run as part of the Checker Framework test suite.
Programs are easier to use and debug if their output is deterministic. For example, it is easier to test a deterministic program, because nondeterminism can lead to flaky tests that sometimes succeed and sometimes fail. As another example, it is easier for a user or programmer to compare two deterministic executions than two nondeterministic executions.
A number of Java methods return nondeterministic results, making any program that uses them potentially nondeterministic. Here are a few examples:

- iteration over `HashMap`s and `HashSet`s (but not `LinkedHashMap`s and `LinkedHashSet`s)
- `File.list()`
- `Object.toString()`, `Object.hashCode()`
- `new Random()`
You can find more examples of non-deterministic specifications, and suggestions for how to avoid them, in the Randoop manual and in the ICST 2016 paper Detecting assumptions on deterministic implementations of non-deterministic specifications by A. Shi, A. Gyori, O. Legunsen, and D. Marinov, which presents the NonDex tool.
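A small, self-contained sketch of the distinction drawn above: `HashSet` iteration order is unspecified and may vary across JDK versions or runs, while `LinkedHashSet` guarantees insertion order, so only the second println is deterministic:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class IterationOrder {
    public static void main(String[] args) {
        // Iteration order depends on hash values and table capacity;
        // it is not part of HashSet's contract.
        Set<String> hash = new HashSet<>(Arrays.asList("b", "a", "c"));
        System.out.println(hash);
        // LinkedHashSet iterates in insertion order, so this output
        // is the same on every run: [b, a, c]
        Set<String> linked = new LinkedHashSet<>(Arrays.asList("b", "a", "c"));
        System.out.println(linked); // [b, a, c]
    }
}
```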
The NonDex tool works dynamically, which means that it cannot detect all user-visible nondeterminism nor give a guarantee of correctness — a guarantee that the program is deterministic from the user's point of view.
The goal of this project is to create a tool, based on a type system, that gives a guarantee. The tool would report to the user all possible nondeterminism in a program, so that the user can fix the program before it causes problems during testing or in the field.
More concretely, this problem can be handled by creating two simple type systems that indicate whether a given value is deterministic. In each diagram, the supertype appears above the subtype.
```
@PossiblyNonDeterministic      @PossiblyNonDeterministicOrder
           |                                 |
    @Deterministic                 @DeterministicOrder
```
The programmer would annotate routines that are expected to take deterministic inputs. (An example could be all printing routines.) Then, the type system would issue a warning whenever one of those routines is called on a possibly non-deterministic value.
The standard library would have annotations indicating which routines (such as those listed above) produce nondeterministic results.
You can find a draft manual chapter that documents a possible design for a Determinism Checker. It differs slightly from the above proposal, for instance by having a single type hierarchy instead of two. That type system is implemented, so your best choice for this project is to do a case study of it, which could lead to design work to improve it.
The Checker Framework comes with a Tainting Checker that is so general that it is not good for much of anything. In order to be useful in a particular domain, a user must customize it:
- rename the `@Tainted` and `@Untainted` qualifiers to something more specific (such as `@Private` or `@PaymentDetails` or `@HtmlQuoted`)
The first part of this project is to make this customization easier to do — preferably, a user does not have to change any code in the Checker Framework, as is currently the case for the Subtyping Checker. As part of making customization easier, a user should be able to specify multiple levels of taint — many information classification hierarchies have more than two levels (for example, the US government separates classified information into three categories: Confidential, Secret, and Top Secret).
The second part of this project is to provide several examples, and do case studies showing the utility of compile-time taint checking.
Possible examples include:
- the `@PrivacySource` and `@PrivacySink` annotations used by the Facebook Infer static analyzer
For some microbenchmarks, see the Juliet test suite for Java from CWE.
Windows cannot run command lines longer than 8191 characters. Creating a too-long command line causes failures when the program is run on Windows. These failures are irritating when discovered during testing, and embarrassing or worse when discovered during deployment. The same command line would work on Unix, which has longer command-line limits, and as a result developers may not realize that their change to a command can cause such a problem.
Programmers would like to enforce that they don't accidentally pass a too-long string to the `exec()` routine. The goal of this project is to produce a compile-time tool that provides such a guarantee.
Here are two possible solutions.
Simple solution: For each array and list, determine whether its length is known at compile time. The routines that build a command line are only allowed to take such constant-length lists, on the assumption that if the length is constant, its concatenation is probably short enough.
More complex solution: For each String, have a compile-time estimate of its maximum length. Only permit `exec()` to be called on strings whose estimate is no more than 8191. String concatenation would return a string whose estimated size is the sum of the maximums of its arguments, and likewise for concatenating an array or list of strings.
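The arithmetic the complex solution would perform at compile time can be sketched as ordinary run-time code (this is only an illustration; all names are invented, and a real checker would track these estimates in the type system rather than at run time):

```java
public class MaxLenSketch {
    static final int WINDOWS_LIMIT = 8191;

    // Concatenation rule: the maximum length of a + b is the sum of
    // the operands' maximum lengths.
    static int concatMax(int maxA, int maxB) {
        return maxA + maxB;
    }

    // The rule the checker would enforce at each exec() call site.
    static boolean safeForExec(int estimatedMax) {
        return estimatedMax <= WINDOWS_LIMIT;
    }

    public static void main(String[] args) {
        int cmd = concatMax(4000, 4000);   // estimate: 8000
        System.out.println(safeForExec(cmd));                    // true
        System.out.println(safeForExec(concatMax(cmd, 1000)));   // false: 9000 > 8191
    }
}
```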
Overflow is when 32-bit arithmetic differs from ideal arithmetic. For example, in Java the `int` computation 2,147,483,647 + 1 yields a negative number, -2,147,483,648. The goal of this project is to detect and prevent problems such as these.
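A self-contained demonstration of the wrap-around, including the classic binary-search midpoint bug that overflow causes (the kind of code mentioned later in this section):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;   // 2_147_483_647
        System.out.println(max + 1);   // -2147483648: wraps to Integer.MIN_VALUE
        // A classic consequence: the textbook midpoint computation overflows
        // for large indices, a bug once present in real binary-search code.
        int low = 2_000_000_000, high = 2_100_000_000;
        System.out.println((low + high) / 2);        // -97483648: wrong (negative index)
        System.out.println(low + (high - low) / 2);  // 2050000000: correct
    }
}
```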
One way to write this is as an extension of the Constant Value Checker, which already keeps track of integer ranges. It even already checks for overflow, but it never issues a warning when it discovers possible overflow. Your variant would do so.
This problem is so challenging that there has been almost no previous research on static approaches to the problem. (Two relevant papers are IntScope: Automatically Detecting Integer Overflow Vulnerability in x86 Binary Using Symbolic Execution and Integer Overflow Vulnerabilities Detection in Software Binary Code.) Researchers are concerned that users will have to write a lot of annotations indicating the possible ranges of variables, and that even so there will be a lot of false positive warnings due to approximations in the conservative analysis. For example, will every loop that contains `i++` cause a warning that `i` might overflow? That would not be acceptable: users would just disable the check.
You can convince yourself of the difficulty by manually analyzing programs to see how clever the analysis has to be, or manually simulating your proposed analysis on a selection of real-world code to learn its weaknesses. You might also try it on good and bad binary search code.
One way to make the problem tractable is to limit its scope: instead of being concerned with all possible arithmetic overflow, focus on a specific use case.
As one concrete application, the Index Checker is currently unsound in the presence of integer overflow. If an integer `i` is known to be `@Positive`, and 1 is added to it, then the Index Checker believes that its type remains `@Positive`. If `i` was already `Integer.MAX_VALUE`, then the result is negative; that is, the Index Checker's approximation to it is unsound. This project involves removing this unsoundness by implementing a type system to track when an integer value might overflow, but this only matters for values that are used as an array index. That is, checking can be restricted to computations that involve an operand of type `@IntRange`. Implementing such an analysis would permit the Index Checker to extend its guarantees even to programs that might overflow.
This analysis is important for some indexing bugs in practice. Using the Index Checker, we found 5 bugs in Google Guava related to overflow. Google marked these as high priority and fixed them immediately. In practice, there would be a run-time exception only for an array of size approximately `Integer.MAX_VALUE`.
You could write an extension of the Constant Value Checker, which already keeps track of integer ranges and even determines when overflow is possible. It doesn't issue a warning, but your checker could record whether overflow was possible (this could be a two-element type system) and then issue a warning, if the value is used as an array index. Other implementation strategies may be possible.
Here are some ideas for how to avoid the specific problem
of issuing a warning about potential overflow for every i++ in
a loop (but maybe other approaches are possible):
- Require the programmer to check i == Integer.MAX_VALUE before
incrementing. This wide-scale, disruptive code change is not
acceptable.
- Give array lengths the type @ArrayLenRange(0, Integer.MAX_VALUE-1) rather
than @UnknownVal, which is equivalent
to @ArrayLenRange(0, Integer.MAX_VALUE). Now, every
array construction requires the client to establish that the length is
not Integer.MAX_VALUE. I don't have a feel for whether
this would be unduly burdensome to users.
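The first idea above, checking against Integer.MAX_VALUE before every increment, might look like the following hypothetical helper; the sketch also shows why requiring it everywhere would be so disruptive:

```java
public class GuardedIncrement {
    // Every i++ in client code would have to be rewritten as a call like this.
    static int increment(int i) {
        if (i == Integer.MAX_VALUE) {
            throw new ArithmeticException("increment would overflow");
        }
        return i + 1;
    }

    public static void main(String[] args) {
        System.out.println(increment(41)); // 42
        try {
            increment(Integer.MAX_VALUE);  // the guard fires instead of wrapping
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```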
The Lock Checker prevents race conditions by ensuring that locks are held when they need to be. It does not prevent deadlocks that can result from locks being acquired in the wrong order. This project would extend the Lock Checker to address deadlocks, or create a new checker to do so.
Suppose that a program contains two different locks. Suppose that one thread tries to acquire lockA then lockB, another thread tries to acquire lockB then lockA, and each thread acquires its first lock. Then each thread will wait forever for the other thread to release its lock. The program will not make any more progress and is said to be deadlocked.
If all threads acquire locks in the same order — in our example, say lockA then lockB — then deadlocks do not happen. You will extend the Lock Checker to verify this property.
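A minimal sketch of the lock-ordering discipline, assuming hypothetical lockA/lockB and account fields; every code path acquires lockA before lockB, so the circular wait described above cannot arise:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static int accountA = 100;
    static int accountB = 0;

    // Always acquire lockA before lockB, regardless of transfer direction.
    // A deadlock needs one thread holding A while waiting for B and another
    // holding B while waiting for A; a global order rules that out.
    static void transfer(int amount) {
        lockA.lock();
        try {
            lockB.lock();
            try {
                accountA -= amount;
                accountB += amount;
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }

    public static void main(String[] args) {
        transfer(30);
        System.out.println(accountA + " " + accountB); // 70 30
    }
}
```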
The Index Checker is currently restricted to fixed-size data structures. A fixed-size data structure is one whose length cannot be changed once it is created; examples of fixed-size data structures are arrays and Strings. This limitation prevents the Index Checker from verifying indexing operations on mutable-size data structures, like Lists, that have add or remove methods. Since these kinds of collections are common in practice, this is a severe limitation for the Index Checker.
The limitation is caused by the Index Checker's use of types that are dependent on the length of data structures,
like @LTLengthOf("data_structure")
. If data_structure
's length could change,
then the correctness of this type might change.
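The problem can be seen in plain Java (names hypothetical; the comments describe the types the Index Checker would need):

```java
import java.util.ArrayList;
import java.util.List;

public class MutableIndexDemo {
    public static void main(String[] args) {
        List<String> data = new ArrayList<>(List.of("a", "b", "c"));
        int i = 2;       // here i could soundly have the type @LTLengthOf("data")
        data.remove(0);  // now data.size() == 2
        // The fact "i < data.size()" no longer holds, so keeping the
        // @LTLengthOf("data") type across the remove() call would be unsound:
        System.out.println(i < data.size()); // false
    }
}
```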
A naive solution would be to invalidate these types any time a method is called on data_structure.
Unfortunately, even that is unsound in the presence of aliasing. Moreover, a good solution to this problem would retain
the information in the type when a method like add or remove is called on data_structure.
A more complete solution might involve some special annotations on List that permit the information to be persisted.
This project would involve designing and implementing a solution to this problem.
Verifying a program to be free of errors can be a daunting task. When starting out, a user may be more interested in bug-finding than verification. The goal of this project is to create a nullness bug detector that uses the powerful analysis of the Checker Framework and its Nullness Checker, but omits some of its more confusing or expensive features. The goal is to create a fast, easy-to-use bug detector. It would enable users to start small and advance to full verification in the future, rather than having to start out doing full verification.
This could be structured as a new NullnessLight Checker, or as a command-line argument to the current Nullness Checker. Here are some differences from the real Nullness Checker:
- Assume that, at each call to Map.get, the given key appears in the map.
- Assume that every method is @Pure: it returns the same value on every call.
- Assume that every type parameter is @NonNull.
Each of these behaviors should be controlled by its own command-line argument, as well as being enabled in the NullnessLight Checker.
The implementation may be relatively straightforward, since in most cases the behavior is just to disable some functionality of existing checkers.
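For instance, the Map.get difference might look like this (method and class names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class MapGetDemo {
    // The full Nullness Checker warns on the dereference below, because
    // Map.get returns null when the key is absent. A NullnessLight-style
    // checker would instead assume the given key appears in the map.
    static int valueLength(Map<String, String> m, String key) {
        String v = m.get(key);
        return v.length(); // safe only when the key is actually present
    }

    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put("k", "value");
        System.out.println(valueLength(m, "k")); // 5
    }
}
```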
Tools such as FindBugs, NullAway, NullnessLight, and the Nullness Checker form a spectrum from easy-to-use bug detectors to sound verification. NullnessLight represents a new point in the design space. It will be interesting to compare these checkers:
Uber's NullAway tool is also
an implementation of this idea (that is, a fast, but incomplete and
unsound, nullness checker). NullAway doesn't let the user specify Java
Generics: it assumes that every type parameter is @NonNull
.
Does Uber's tool provide users a good
introduction to the ideas that a user can use to transition to a nullness
type system later?
This project is to improve support for typestate checking.
Ordinarily, a program variable has
the same type throughout its lifetime from when the variable is declared
until it goes out of scope. “Typestate”
permits the type of an object or variable to change in a controlled way.
Essentially, it is a combination of standard type systems with dataflow
analysis. For instance, a file object changes from unopened, to opened, to
closed; certain operations such as writing to the file are only permitted
when the file is in the opened typestate. Another way of saying this is
that write
is permitted after open
, but not after close
.
Typestate
is applicable to many other types of software properties as well.
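A minimal sketch of the file example, with the typestate property enforced at run time purely for illustration; a typestate checker would instead reject the bad call at compile time (all names hypothetical):

```java
public class TypestateDemo {
    enum State { UNOPENED, OPENED, CLOSED }

    static class TrackedFile {
        private State state = State.UNOPENED;

        void open()  { state = State.OPENED; }
        void close() { state = State.CLOSED; }

        // write is only permitted in the OPENED typestate.
        void write() {
            if (state != State.OPENED) {
                throw new IllegalStateException("write requires OPENED, was " + state);
            }
        }
    }

    public static void main(String[] args) {
        TrackedFile f = new TrackedFile();
        f.open();
        f.write();     // permitted: after open, before close
        f.close();
        try {
            f.write(); // not permitted after close
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```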
Two typestate checking frameworks exist for the Checker Framework. Neither is being maintained; a new one needs to be written.
We also welcome your ideas for new type systems. For example, any run-time failure can probably be prevented at compile time with the right analysis. Can you come up with a way to fix your pet peeve?
It is easiest, but not required, to choose an existing type system from the literature, since that means you can skip the design stage and go right to implementation.
This task can be simple or very challenging, depending on how ambitious the type system is. Remember to focus on what helps a software developer most!
The JDK is the most important library to annotate, and the Checker
Framework ships with partial annotations for it.
These are scattered in multiple locations: in multiple subdirectories of
checker/jdk/,
and in files named jdk.astub under
checker/src/org/checkerframework/checker/.
The goal of this project is to put the annotations in a single place: a new clone of the JDK repository. The effort can be partially automated (say, by using the Annotation File Utilities to move annotations around, or by enhancing the Annotation File Utilities if they are not up to the task) and partially done manually.
A number of type annotations take, as an
argument, a
Java expression. The parser for these is a hack. The goal of this
project is to replace it by calls
to JavaParser.
For example, the FlowExpressions.Receiver
class, which
represents an AST, should be replaced by the JavaParser AST.
This task should be straightforward, since JavaParser is already used in other parts of the Checker Framework.
The
Annotation
File Utilities, or AFU, insert annotations into, and extract
annotations from, .java
files, .class
files,
and text files. These programs were written before the
ASM bytecode library supported Java 8's
type annotations. Therefore, the AFU has its own custom version of ASM
that supports type annotations. Now that ASM 6 has been released and it
supports type annotations, the AFU needs to be slightly changed to use
the official ASM 6 library instead of its own custom ASM variant.
This project is a good way to learn about .class
files and
Java bytecodes: how they are stored, and how to manipulate them.
Many program analyses are too verbose for a person to read their entire output. However, after a program change, the analysis results may change only slightly. An "analysis diff" tool could show the difference between the analysis run on the old code and the analysis run on the new code.
The analysis diff tool would take as input two analysis results (the previous and the current one). It would output only the new parts of its second input. (It could optionally output a complete diff between two analysis results.)
One challenge is dealing with changed line numbers and other analysis output differences between runs.
It would be nice to integrate the tool with git pre-commit hooks or GitHub pull requests, to enable either of the following functionality (for either commits to master or for pull requests):
A concrete example of an analysis diff tool is checklink-persistent-errors; see the documentation at the top of the file. That tool only works for one particular analysis, the W3C Link Checker. An analysis diff tool also appears to be built into FindBugs. The goal of this project is to build a general-purpose tool that is easy to apply to new analyses.
A type system is useful because it prevents certain errors. The downside of a type system is the effort required to write the types. Type inference is the process of automatically determining the types for a program.
Type-checking is a modular, or local, analysis. For example, given a procedure in which types have been written, a type-checker can verify the procedure's types without examining the implementation of any other procedure.
By contrast, type inference is a non-local, whole-program analysis. For example, to determine what type should be written for a procedure's formal parameter, it is necessary to examine the type of the argument at every call to that procedure. At every call, to determine the type of some argument A, it may be necessary to know the types of the formal parameters to the procedure that contains A, and so forth. It is possible to resolve this seemingly-infinite regress, but only by examining the entire program in the worst case.
The differences between type checking and type inference mean that they are usually written in very different ways. Type inference is usually done by first collecting all of the constraints for the entire program, then passing them to a specialized solver. Writing a type inference tool is harder. Worst of all, it's annoying to encode all the type rules twice in different ways: once for the type checker and once for the type inference tool.
As a result, many type systems have a type checker but no type inference tool. This makes programmers reluctant to use these type systems, which denies programmers the benefits of type-checking.
The goal of this project is to automatically create type inference tools from type-checking tools, so that it is not necessary for the type system designer to implement the type system twice in different ways.
A key insight is that the type-checker already encodes all knowledge about what is a legal, well-typed program. How can we exploit that for the purpose of type inference as well as type-checking? The idea is to run the type-checker repeatedly, observing what types are passed around the program and what errors occur. Each iteration collects more information, until there is nothing more to learn.
This approach has some disadvantages: it is theoretically slower, and theoretically less accurate, than a purpose-built type inference tool for each type system. However, it has the major advantage that it requires no extra work to implement a type inference tool. Furthermore, maybe it works well enough in practice.
A prototype implementation of this idea already exists, but it needs to be evaluated in order to discover its flaws, improve its design, and discover how accurate it is in practice.
A valuable project all by itself would be to compare heavyweight and lightweight type inference (this whole-program inference vs. Checker Framework Inference vs. Julia), to understand when each one is worth using.
The Checker Framework's dataflow framework (manual here) implements flow-sensitive type refinement (local type inference) and other features. It is used in the Checker Framework and also in Error Prone, NullAway, and elsewhere.
There are a number of open issues — both bugs and feature requests — related to the dataflow framework. The goal of this project is to address as many of those issues as possible, which will directly improve all the tools that use it.
A program analysis technique makes estimates about the current values of expressions. When a method call occurs, the analysis has to throw away most of its estimates, because the method call might change any variable. If the method is known to have no side effects, then the analysis doesn't need to throw away its estimates, and the analysis is more precise.
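A small illustration of why side-effect freedom matters for analysis precision (names hypothetical): two calls to an impure method need not agree, so an analysis cannot reuse its estimate of the first result, whereas calls to a pure method can be trusted to repeat.

```java
public class PurityDemo {
    private int counter = 0;

    // Impure: each call mutates state, so consecutive calls differ.
    int impureNext() { return ++counter; }

    // Pure: no side effects, same result on every call with the same argument.
    int pureSquare(int x) { return x * x; }

    public static void main(String[] args) {
        PurityDemo d = new PurityDemo();
        System.out.println(d.impureNext() == d.impureNext()); // false
        System.out.println(d.pureSquare(3) == d.pureSquare(3)); // true
    }
}
```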
For example, the Checker Framework unsoundly trusts but does not check purity annotations. This makes the system vulnerable to programmer mistakes when writing annotations. The Checker Framework contains a sound checker for immutability annotations, but it suffers from too many false positive warnings and thus is not usable. A better checker is necessary. It will also incorporate aspects of an escape analysis.
Choosing an algorithm from the literature is the best choice, but there still might be research work to do: in the past, when implementing algorithms from research papers, we have sometimes found that they did not work as well as claimed, and we have had to enhance them. One challenge is that any technique used by pluggable type-checking to verify immutability must be modular, but many side effect analyses require examining the whole program. The system should require few or no method annotations within method bodies. I'm not sure whether such a system already exists or we need to design a new one.
Perhaps one of these existing side effect analyses could be used: https://github.com/Sable/soot/wiki/Using-Side-Effect-Attributes http://www2.informatik.uni-freiburg.de/~geffken/GeffkenST14.pdf
Currently, type annotations are only displayed in Javadoc if they are explicitly written by the programmer. However, the Checker Framework provides flexible defaulting mechanisms, reducing the annotation overhead. This project will integrate the Checker Framework defaulting phase with Javadoc, showing the signatures after applying defaulting rules.
There are other type-annotation-related improvements to Javadoc that can be explored, e.g. using JavaScript to show or hide only the type annotations currently of interest.
The Checker Framework runs much slower than the standard javac compiler — often 20 times slower! This is not acceptable as part of a developer's regular process, so we need to speed up the Checker Framework. This project involves determining the cause of slowness in the Checker Framework, and correcting those problems.
This is a good way to learn about performance tuning for Java applications.
Some concrete tasks include:
- Intern Element objects. Interning
could save time when doing comparisons. You can verify the correctness
of the optimization by running the
Interning
Checker on the Checker Framework code. Compare the run time of the
Checker Framework before and after this optimization.
Implement run-time checking to complement compile-time checking. This will let users combine the power of static checking with that of dynamic checking.
Every type system is too strict: it rejects some programs that never go wrong at run time. A human must insert a type loophole to make such a program type-check. For example, Java takes this approach with its cast operation (and in some other places).
When doing type-checking, it is desirable to automatically insert run-time checks at each operation that the static checker was unable to verify. (Again, Java takes exactly this approach.) This guards against mistakes by the human who inserted the type loopholes. A nice property of this approach is that it enables you to prevent errors in a program with no type annotations: whenever the static checker is unable to verify an operation, it would insert a dynamic check. Run-time checking would also be useful in verifying whether the suppressed warnings are correct — whether the programmer made a mistake when writing them.
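Java's cast shows the pattern: an operation the static type system cannot verify gets an automatically inserted run-time check that fails fast instead of corrupting state (class name hypothetical):

```java
public class RuntimeCheckDemo {
    public static void main(String[] args) {
        Object o = "hello";
        // The compiler cannot prove o is a String, so the JVM checks at run time.
        String s = (String) o; // check passes
        System.out.println(s.length()); // 5

        Object n = Integer.valueOf(42);
        try {
            String bad = (String) n; // check fails: ClassCastException
            System.out.println(bad);
        } catch (ClassCastException e) {
            System.out.println("run-time check caught: " + e.getMessage());
        }
    }
}
```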
The annotation processor (the pluggable type-checker) should automatically insert the checks, as part of the compilation process.
There should be various modes for the run-time checks.
The run-time penalty should be small: a run-time check is necessary only at the location of each cast or suppressed warning. Everywhere that the compile-time checker reports no possible error, there is no need to insert a check. But, it will be an interesting project to determine how to minimize the run-time cost.
Another interesting, and more challenging, design question is whether you need to add and maintain a run-time representation of the property being tested. It's easy to test whether a particular value is null, but how do you test whether it is tainted, or should be treated as immutable? For a more concrete example, see the discussion of the (not yet implemented) [Javari run-time checker](http://pag.csail.mit.edu/pubs/ref-immutability-oopsla2005-abstract.html). Adding this run-time support would be an interesting and challenging project.
We developed a prototype for the EnerJ runtime system. That code could be used as starting point, or you could start afresh.
In the short term, this could be prototyped as a source- or bytecode-rewriting approach; but integrating it into the type checker is a better long-term implementation strategy.
The Checker Framework comes with support for external tools, including both IDEs (such as an Eclipse plug-in) and build tools (instructions for Maven, etc.).
These plug-ins and other integration should be improved. We have a number of concrete ideas, but you will also probably come up with some after a few minutes of using the existing IDE plugins!
This is only a task for someone who is already an expert, such as someone who has built IDE plugins before or is very familiar with the build system. One reason is that these tools tend to be complex, which can lead to subtle problems. Another reason is that we don't want to be stuck maintaining code written by someone who is just learning how to write an IDE plugin.
Rather than modifying the Checker Framework's existing support or building new support from scratch, it may be better to adapt some other project's support for build systems and IDEs. For instance, you might make coala support the Checker Framework, or you might adapt the tool integration provided by Error Prone.
Design and implement an algorithm to check type soundness of a type system by exhaustively verifying the type checker on all programs up to a certain size. The challenge lies in efficient enumeration of all programs and avoiding redundant checks, and in knowing the expected outcome of the tests. This approach is related to bounded exhaustive testing and model checking; for a reference, see [Efficient Software Model Checking of Soundness of Type Systems](http://www.eecs.umich.edu/~bchandra/publications/oopsla08.pdf).