
ICS Technical Reports

ICS has a technical report series as well as three academic departments and three ORUs associated with it, each of which generates new information and knowledge.

JPloy: user-centric deployment support in a component platform

(2004)

Based on a vision that, in the future, applications will be flexibly built out of small-grained components, we argue that current technologies do not adequately support component deployment in such a setting. Specifically, current technologies realize deployment processes in which most decisions are made by the application manufacturer. When using small-grained components, however, the component user needs more control over the deployment process; user-centric deployment is needed. In this paper, we describe our initial efforts at providing user-centric deployment. We present JPloy, a prototypical tool that gives a user more control over the configuration of installed Java components. JPloy extends the Java class loader so that custom configurations can be applied to existing components without having to modify the components themselves. For example, name-space or versioning conflicts among components can be elegantly resolved in this way. We demonstrate JPloy by applying it to an example application.
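
The class-loader mechanism that JPloy builds on can be illustrated in plain Java. The sketch below is a minimal illustration of version isolation via separate class loaders, not JPloy's actual code; the jar paths and the class name com.example.Codec are hypothetical.

    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolatedLoading {
        public static void main(String[] args) throws Exception {
            // Hypothetical jars, each containing its own version of com.example.Codec.
            URLClassLoader v1 = new URLClassLoader(
                    new URL[] { new URL("file:libs/codec-1.0.jar") }, null);
            URLClassLoader v2 = new URLClassLoader(
                    new URL[] { new URL("file:libs/codec-2.0.jar") }, null);

            // The same binary name resolves to two distinct runtime classes,
            // so both component versions can coexist in one JVM.
            Class<?> c1 = v1.loadClass("com.example.Codec");
            Class<?> c2 = v2.loadClass("com.example.Codec");
            System.out.println("distinct classes: " + (c1 != c2));
        }
    }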

Stream-processing point data

(2004)

With the rapidly increasing size of captured 3D models, e.g., from high-resolution laser range scanning devices, it has become increasingly important to provide basic point processing methods for large raw point data sets. In this paper we present a novel stream-based point processing framework that orders unorganized raw points along a spatial dimension and processes them sequentially. The major advantage of our novel concept is its extremely low main-memory usage and its ability to process very large data sets out-of-core in sequential order. Furthermore, the framework supports local operators and is extensible: multiple operators can be concatenated in succession.
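
A hedged sketch of the general idea (not the paper's framework): sort the points along one axis, then sweep a bounded window over them, so main memory grows with the slab width rather than the model size. The slab width and the placeholder operator are assumptions for illustration.

    import java.util.ArrayDeque;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.Random;

    public class StreamSweep {
        public static void main(String[] args) {
            Random rng = new Random(42);
            float[][] pts = new float[100_000][3];
            for (float[] p : pts) {
                p[0] = rng.nextFloat(); p[1] = rng.nextFloat(); p[2] = rng.nextFloat();
            }

            // Order the unorganized points along one spatial dimension (here: z).
            Arrays.sort(pts, Comparator.comparingDouble(p -> p[2]));

            // Sweep: retain only points whose z lies within a small slab of the
            // current point; memory stays proportional to the slab, not the model.
            final float slab = 0.01f;
            ArrayDeque<float[]> window = new ArrayDeque<>();
            for (float[] p : pts) {
                while (!window.isEmpty() && p[2] - window.peekFirst()[2] > slab)
                    window.pollFirst();
                window.addLast(p);
                // A local operator would run here against 'window', e.g.
                // density estimation or normal fitting on nearby points.
            }
            System.out.println("final window size: " + window.size());
        }
    }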

SSA-based Java bytecode verification

(2004)

Java bytecode is commonly verified prior to execution. The standard verifier is designed as a black-box component that either accepts or rejects its input. Internally, it uses an iterative data-flow analysis to trace definitions of values to their uses to ensure type safety. The results of the data-flow analysis are discarded once verification has completed. In many JVMs, this leads to a duplication of work, since definition-use chains will be computed all over again during just-in-time compilation. We introduce a novel bytecode verification algorithm that verifies bytecode via Static Single Assignment (SSA) form construction. The resulting SSA representation can immediately be used for optimization and code generation. Our prototype implementation takes less time to transform bytecode into SSA form and verify it than Sun's verifier takes to merely confirm the validity of Java bytecode, with the added benefit that SSA is available "for free" to later compilation stages.
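
To convey the flavor of combining type checking with SSA construction, here is a toy, straight-line-only sketch of our own devising (no control flow, hence no phi functions; the real algorithm must also place phis at merge points). The opcode set and type encoding are invented for illustration; requires Java 14+.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TinySsaVerify {
        enum Op { ICONST, ACONST_NULL, IADD, ISTORE } // toy opcode set

        public static void main(String[] args) {
            Op[] code = { Op.ICONST, Op.ICONST, Op.IADD, Op.ISTORE };
            Deque<String> stack = new ArrayDeque<>(); // abstract operand stack
            int tmp = 0;
            for (Op op : code) {
                switch (op) {
                    case ICONST -> stack.push("int v" + tmp++);
                    case ACONST_NULL -> stack.push("ref v" + tmp++);
                    case IADD -> {
                        String b = stack.pop(), a = stack.pop();
                        if (!a.startsWith("int") || !b.startsWith("int"))
                            throw new VerifyError("iadd on non-int operands");
                        // Each result gets a fresh SSA name: one definition, ever.
                        stack.push("int v" + tmp++);
                    }
                    case ISTORE -> {
                        if (!stack.pop().startsWith("int"))
                            throw new VerifyError("istore of non-int");
                    }
                }
            }
            System.out.println("verified; " + tmp + " SSA values created");
        }
    }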

Cached geometry manager for view-dependent LOD rendering

(2004)

The new generation of commodity graphics cards with significant on-board video memory has become widely popular and provides high-performance rendering and flexibility. One of the features to be exploited is the use of on-board video memory to store geometry information. This strategy significantly reduces the overhead of transferring geometry data from main memory to the graphics card over the (AGP) bus interface. However, taking advantage of cached geometry is not a trivial task because data models often exceed the memory size of the graphics card. In this paper we present a dynamic cached geometry manager (CGM) to address this issue. We show how this technique improves the performance of real-time view-dependent level-of-detail (LOD) selection and rendering algorithms for large data sets. Our approach has been analyzed within two view-dependent progressive mesh (VDPM) frameworks: one for rendering arbitrary manifold 3D meshes, and one for terrain visualization.
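
The abstract does not spell out the CGM's replacement policy. As an assumption for illustration only, the sketch below manages a fixed video-memory budget with LRU eviction; actual buffer uploads and frees are stubbed out in comments.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class GeometryCache {
        private final long budgetBytes;
        private long usedBytes = 0;
        // Access-ordered map: iteration order is least-recently-used first.
        private final LinkedHashMap<Integer, Integer> resident =
                new LinkedHashMap<>(16, 0.75f, true);

        GeometryCache(long budgetBytes) { this.budgetBytes = budgetBytes; }

        // Ensure a mesh patch is resident in video memory before rendering.
        void touch(int patchId, int sizeBytes) {
            if (resident.containsKey(patchId)) { resident.get(patchId); return; }
            while (usedBytes + sizeBytes > budgetBytes && !resident.isEmpty()) {
                Map.Entry<Integer, Integer> lru = resident.entrySet().iterator().next();
                usedBytes -= lru.getValue();   // evict: free its vertex buffer here
                resident.remove(lru.getKey());
            }
            resident.put(patchId, sizeBytes);  // upload: create its vertex buffer here
            usedBytes += sizeBytes;
        }

        public static void main(String[] args) {
            GeometryCache cache = new GeometryCache(1 << 20); // 1 MB toy budget
            for (int frame = 0; frame < 3; frame++)
                for (int patch = 0; patch < 10; patch++)
                    cache.touch(patch, 200_000);
            System.out.println("resident patches: " + cache.resident.keySet());
        }
    }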

Exploiting relationships for data cleaning

(2004)

In this paper we address the problem of data cleaning when multiple data sources are merged to create a single database. Specifically, we focus on the problem of determining whether two representations in two different sources refer to the same entity. Current research has focused on linking records from different sources by computing the similarity among them based on their attribute values. Our approach explores a new research direction by exploiting relationships among records for the purpose of cleaning. It is based on the hypothesis that if two representations refer to the same entity, there is a high likelihood that they are strongly connected to each other through multiple relationships implicit in the database. We view the database as a graph in which nodes correspond to entities and edges to relationships among the entities. Any one of the existing conventional approaches is first used to determine possible matches among entities. Graph analysis techniques are then used to disambiguate among the various choices. While our approach is domain independent, it can be tuned to specific domains by incorporating domain-specific rules. We demonstrate the applicability of our method on a large real dataset.
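
The abstract leaves the concrete connection-strength measure open. As one hypothetical instantiation, the sketch below counts distinct short paths between two candidate representations in the entity graph and treats a higher count as evidence that they co-refer; the graph data and length bound are invented.

    import java.util.*;

    public class RelationshipCheck {
        // Toy entity graph: undirected adjacency lists over entity ids.
        static Map<String, List<String>> g = new HashMap<>();

        static void edge(String a, String b) {
            g.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
            g.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
        }

        // Count distinct simple paths (length <= maxLen) between two candidates;
        // more paths = stronger implicit connection between the representations.
        static int shortPaths(String src, String dst, int maxLen) {
            int count = 0;
            Deque<List<String>> q = new ArrayDeque<>();
            q.add(List.of(src));
            while (!q.isEmpty()) {
                List<String> path = q.poll();
                String last = path.get(path.size() - 1);
                if (last.equals(dst)) { count++; continue; }
                if (path.size() > maxLen) continue;
                for (String n : g.getOrDefault(last, List.of()))
                    if (!path.contains(n)) {
                        List<String> p = new ArrayList<>(path);
                        p.add(n);
                        q.add(p);
                    }
            }
            return count;
        }

        public static void main(String[] args) {
            edge("J. Smith", "paper1");  edge("John Smith", "paper2");
            edge("paper1", "UC Irvine"); edge("paper2", "UC Irvine");
            edge("paper1", "coauthor A"); edge("paper2", "coauthor A");
            System.out.println("connecting paths: "
                    + shortPaths("J. Smith", "John Smith", 4));
        }
    }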

Point light fields for point rendering systems

(2003)

In this paper, we introduce the concept of point light fields for point-based rendering systems, analogous to surface light fields for polygonal rendering systems. We evaluate two representations, namely singular value decomposition and spherical harmonics, for their representation accuracy, storage efficiency, and suitability for real-time reconstruction of point light fields. We have applied our algorithms to both real-world and synthetic point-based models. We show the results of our algorithm using an advanced point-based rendering system.
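
A worked sketch of the spherical-harmonics variant, under strong simplifying assumptions (bands 0 and 1 only, a toy clamped-cosine radiance, uniform Monte Carlo projection); the paper's actual basis order and fitting procedure may differ.

    public class ShPointLight {
        // First four real spherical harmonics basis functions (bands 0 and 1).
        static double[] shBasis(double x, double y, double z) {
            return new double[] {
                0.282095,          // Y_0^0
                0.488603 * y,      // Y_1^{-1}
                0.488603 * z,      // Y_1^0
                0.488603 * x       // Y_1^1
            };
        }

        public static void main(String[] args) {
            java.util.Random rng = new java.util.Random(1);
            double[] coeff = new double[4];
            int n = 100_000;
            // Project a toy directional radiance (bright toward +z) onto the
            // basis by Monte Carlo integration over uniform directions.
            for (int i = 0; i < n; i++) {
                double z = 2 * rng.nextDouble() - 1;
                double phi = 2 * Math.PI * rng.nextDouble();
                double r = Math.sqrt(1 - z * z);
                double x = r * Math.cos(phi), y = r * Math.sin(phi);
                double radiance = Math.max(0, z);  // toy light-field sample
                double[] b = shBasis(x, y, z);
                for (int k = 0; k < 4; k++)
                    coeff[k] += radiance * b[k] * (4 * Math.PI / n);
            }
            // Reconstruct the radiance toward +z from the 4 stored coefficients.
            double[] b = shBasis(0, 0, 1);
            double rec = 0;
            for (int k = 0; k < 4; k++) rec += coeff[k] * b[k];
            // Band-limited reconstruction of max(0,z) at its peak is 0.75,
            // not the exact 1.0; more bands would tighten the approximation.
            System.out.printf("reconstructed radiance toward +z: %.3f%n", rec);
        }
    }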

A practical mobile-code format with linear verification effort

(2003)

We present an abstract machine that encodes both type safety and control safety in an efficient manner and that is suitable as a mobile-code format. At the code consumer, a single linear-complexity algorithm not only performs verification but simultaneously transforms the stack-based wire format into a register-based internal format. The latter is beneficial for interpretation and native code generation. Our dual-representation approach overcomes some of the disadvantages of existing mobile-code representations, such as the JVM and CLR wire formats.
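
The core single-pass idea can be hedged into a toy (our illustration, not the paper's actual wire format): while checking stack discipline, each stack slot is named with a fresh virtual register, so register-based code falls out of the same linear pass.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StackToRegisters {
        public static void main(String[] args) {
            // Toy stack-based wire format.
            String[] wire = { "push 2", "push 3", "add" };
            Deque<Integer> stack = new ArrayDeque<>(); // register numbers, not values
            int nextReg = 0;
            // One linear pass both checks stack discipline and emits register code.
            for (String ins : wire) {
                if (ins.startsWith("push")) {
                    int r = nextReg++;
                    System.out.println("r" + r + " = " + ins.substring(5));
                    stack.push(r);
                } else if (ins.equals("add")) {
                    if (stack.size() < 2) throw new VerifyError("stack underflow");
                    int b = stack.pop(), a = stack.pop();
                    int r = nextReg++;
                    System.out.println("r" + r + " = r" + a + " + r" + b);
                    stack.push(r);
                }
            }
        }
    }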

Proofing: an efficient and safe alternative to mobile-code verification

(2003)

The safety of the Java Virtual Machine is founded on bytecode verification. Although verification complexity appears to roughly correlate with program size in the average case, its worst-case behavior is quadratic. This can be exploited for denial-of-service attacks using relatively short programs (applets or agents) specifically crafted to keep the receiving virtual machine's verifier busy for an inordinate amount of time. Instead of the existing, quadratic-complexity verification algorithm, which needs to decide the validity of any given bytecode program, we present a linear-complexity alternative that merely ensures that no unsafe program is ever passed on to the virtual machine. Hence, in certain cases, our algorithm will modify an unsafe bytecode program to make it safe, a process that we call "proofing". Proofing does not change the semantics of programs that would have passed the original bytecode verifier. For programs that would have failed verification, our algorithm will, in linear time, either reject them or transform them into programs (of unspecified semantics) that are guaranteed to be safe. Our method also solves a long-standing problem in which, for certain perfectly legal Java source programs, the bytecodes produced by Java compilers are erroneously rejected by existing verifiers.
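
A toy, straight-line illustration of the proofing idea, on an invented instruction set (the actual algorithm works on real Java bytecode with control flow): instead of rejecting a program whose abstract stack would underflow, the offending instruction is rewritten into a safe trap in a single linear pass.

    public class Proofing {
        public static void main(String[] args) {
            // Toy straight-line program with a stack underflow at 'add'.
            String[] code = { "push 1", "add", "push 2" };
            int depth = 0; // abstract stack depth, tracked in one linear pass
            for (int i = 0; i < code.length; i++) {
                if (code[i].startsWith("push")) depth++;
                else if (code[i].equals("add")) {
                    if (depth < 2) code[i] = "trap"; // make it safe, don't reject
                    else depth--;
                }
            }
            System.out.println(String.join("; ", code));
        }
    }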

A denial of service attack on the Java bytecode verifier

(2003)

Java bytecode verification has so far mostly been approached from a correctness perspective: security vulnerabilities have been found repeatedly and were corrected shortly thereafter. However, correctness is not the only potential point of failure in the verifier idea. In this paper we construct Java code that is correct but requires an excessive amount of time to prove safe. In contrast to previous flaws in the bytecode verifier, the enabling property for this exploit lies in the verification algorithm itself rather than in its implementation, and it is thus not easily fixable. We explain how this architectural weakness could be exploited for denial-of-service attacks on JVM-based services and devices.
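
The crafted programs themselves are not reproduced in the abstract. As a rough cost model only (an assumption, not the paper's construction), the sketch below shows how total verification work grows quadratically when each of n merge points forces the iterative data-flow analysis to re-examine all earlier blocks.

    public class QuadraticVerify {
        public static void main(String[] args) {
            for (int n = 1_000; n <= 8_000; n *= 2) {
                // Model: a data-flow change at merge point i triggers
                // re-examination of blocks 0..i, as in worst-case inputs.
                long visits = 0;
                for (int i = 0; i < n; i++)
                    for (int j = 0; j <= i; j++) visits++;
                System.out.println(n + " blocks -> " + visits + " block visits");
            }
        }
    }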

Supporting software composition at the programming-language level

(2003)

We are in the midst of a paradigm shift toward component-oriented software development, and significant progress has been made in understanding and harnessing this new paradigm. Somewhat strangely then, the new paradigm does not currently extend all the way down to how the components themselves are constructed. While we have composition architectures and languages that describe how systems are put together out of such atomic program parts, the parts themselves are still constructed based on a previous paradigm, object-oriented programming. We argue that this represents a mismatch that is holding back compositional software design: many of the assumptions that underlie object-oriented systems simply do not apply in the open and dynamic contexts of component software environments. What, then, would a programming language look like that supported component-oriented programming at the smallest granularity? Our project to develop such a language, Lagoona, tries to provide an answer to this question. This paper motivates the new key concepts behind Lagoona and briefly describes their realization (using Lagoona itself as the implementation language) in the context of Microsoft's .NET environment.